[Binary artifact: tar archive containing `var/home/core/zuul-output/logs/kubelet.log.gz` (gzip-compressed kubelet log from a Zuul CI run). The compressed payload is not recoverable as text.]
i#yIrFh2KMαdbjS9[寠svr~^~|=XM5V]zUL]0ˊU\PP,[)vds#,g Lwj<hL4]sA:'z̐!EJ}.XggDzº]c eɉyD$ 9ƀlNRJ$9°+Q*+A؞3%I#|{`TY' {H%mFΖ% 94¾畒 o2mۥqM>u[}ZVzp]niLKͷpJ .Im Axţ’.- H$d4|V/bP) 0FXaګ^a<5CjHܼQ TEys{Wǥꡕu+ Rs&(NRl1%K1 Xm} ٧qܹrM(RPxߒK%#WRѤ]1Yk C)WxHU\)ҰԷE^>PԺ}k]< ?/2.j_=ǝ|p~tT Ds5X3"oh򡽵%<Ͱ$?y,60;\_s0ѿ?>O,K-y?|F:$7+d.-b3.;e{oxQqf$ڛc̟-32|6%W0urdA<>s\pljO9h?ѩo ɜKOZK̎?j\RV0ڋ[?=$v1i.$OP2oF+suk?_'ߜ?xy!FRĜGr|tK^JA?[|_s17RwyڑHo?\PĔǓFՓOǟ-<:9:nr*.&7j\njGՊF^HXA|6 jF9IWX۩[ v+Ofq?ֽmN◶$4J_GIwwl]|f];4k!Mh=ţC(ɟ_~7/^Ox/g9}$pvk~|6ijh~~iY͸v):vi$&C 兿ո&ւ#(Σ*j&U;vZ/@[}? "`db)2VhKo~g?N&߂*%S!xSVJ$7b=6sU8EMX]٦d.6XY{bVF%^%\% DQ^d9iW5g秷^db%1¶L@wW;ОGdR xR+K`zYo_͕|ᅤ_Hf˴m R0.6 յ7Z*v o=W{י ໺" '+W\2p.ʗ0)<2K< 0p%c$ ) G/,pqӑ#g3LMNC{\7uS-w\Nޏ[|alN]ݾY{HW]M* Aά* Ŷk % k(bG%hD1!'w;pY|>Mz^/7ِvl<wŮ⛇ot6̃Z Bkio Γ_x~}.Rx3>yg Xq~'[L9?O ~OU>ݎZf[QRB/Z42 9zֺr.B3_>h7Mn_ex}u:oRmxl؅g˃ Z0vy~>jEuϟ^`OGx} 0zΥw G^--vAG& Rk qTq<z52W`람:T\\P8joӡw.6G?.%E<2xQ%MX)蠸gJWN0 Dh|q"@fɒ G)O M(˖H4LaIxv}JHLRqWl\K-9˙vcR Z霤/8999T,TLAHmb3CIA2W2IlȩgƊsIM$$xE!EE(% :)l]{M2~v 6H40#_dSϔy2h` i0w ^$ i k$UC!->c$zYβ5\q]A9/.D;2H!x1)û 2)VJ눵#X!%[S/$2sH]Q2#9&͠b`DSIz&AЩYEAX1y6t0tw:b-dT!+w ϐBu]FHEb Ұ2 S*!0mXEzI; D/^G,Qt Il`@U$̲n&,ϊڃ 0!K{-!=#,\}f"w1ejfL|"R,Q$:peH WE7 JΘ Å pk&7q n[1/Yf͘ @5bl2TIbYjB {aZ {,*3G>n:@C6+*Zt)\]h)7TQíɀ< ;l_jTc#xPr ($bDN W$2+B\/~XeGƁ6S^\@$6X4_%AJWAxJ܊`P۠p]g,PE9UP# VϓW14+JXm))ಒw2<}XQTvj z ֡TXD 4.1#dn "e~81PsJ@m܄LeȍL WjmRcPY|r\fR,JH#\5%i\4 Z`pɑǝr5WhS*R3QU=8FD PPX?˙̀:5E zAu!֯FԔHb|d8bOVS:( qmJMѓJ 󫆗 .Z6l!&XKo&b] 120+QMjSjVf!2 .@,9vR@0Z`4bM4F Sq1yCGVy]ōɎzcwN;Tyk%Mڬ :rv~^ kvqEԎ+S%n^ &pGY{~<~?'& 60`}6$ƸcjB6ԇxٲ_Wl,ncVۯQ2~"bmUBF 3 >ўˬm@xT*klUP?¶_4Fx^tx:C_Յpa!:w ǃwU^vot&/r)6KlӣDF/ǝVVf(N; GѵfPr"ZGYG/C`#q-Ux{R[p!ރgVϖxij%-lgK<[ϖxij%-lgK<[ϖxij%-lgK<[ϖxij%-lgK<[ϖxij%-lgK<[ϖxij%-lglѷψg  ϶͉{ZOkV~<[ǭ9|Yosm$lsO0nmra99eռ;|wxdoQ|${: iF!uF )bPumc&a}"]gsm]!s_%L2zJ>zNjX,lY{- t,$^,''yJ#‡g-= ہQ['nvsEx<yF\E5qJP;Giݖ^ŋ>?[=om% _^=>L샡| oge탟&}"^MνY{V{8d9}W~ٽf5n˜wx$T˝ J00M%i΁ G(q~{ ?( ֶ V12W/Љ -2%Cm;:L&GcՑ䙻UZc9 Q{7oG TMw]uD2[໒C/;]7Ra?%uWzPڻnYf>c`LʵBgܠVǩfg]{O>٩&.( 
A\iO{2՘Wc!V<' lјhg1k= HcKИ3-L0O924-I%H#́h+뤔S5;r2ԜR>+ճ=񽞃D >}" $]1+-O TiɛI͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋ԼH͋Լ>Fotɸ~w*%/W'=[t[(0C7MiPaO"nHim}%T:6x~;9>P)Ʈ":%YժKΉo*B?A/y>ZL/BC$9t?# Rg14JzPWj W-H+W:8ι *cp֨nN7^(ghn"qrw}Af؞q{ W2dl ;MvE~soM x:Th;OOOߖ}ìr\I&9K#_9<岹=Uܛ%ЇW'z=&WcDM@@b3.;b{o˴rg'{e*(|!1Vb(M B[|\in :37DLӛ~6uM2?isn^o[E%Od^3uu+ek}>xrxQ'V4v^շ/b{pՎ?O+BRe_Oνۓ4#O}ӓ/w+V1t^gӷmw&jNO9amYzOR'=F ?Ɖ:+ \>F[+L>nzy亗sv]K=u]^tlKe3FV{_O8N֧Kį:i3|O5+o/iqJzo2-ǗãωXc6I)b~2A-W\IZRt]YeU <+N˓4=: _|滗~]7oï߿Wopo^7/'%!8 }.=/xy ~1_y\v7>8w+S&Vn,bZXr[:Qš-nV7׿@ v -(P?;vW,'ufI?˃ޕ+bS\\rpI&!@pDz#>Kb%KX6>MSW G?Quq&{XIJNLLq̘dfL8e!a`J JuL5!iD);мu! "(8sMYs%RK.y)vGG-cFiN]o1R'#tKʎ h#|;Z:Y|eCW0`ou|+{Ka_x)W^%O}%<1T4~ƟK]9u}Nb?['H |5T2R,qfnd&m` ,Ed`mʐy T%PET0 upA^Gxgmw0por}}pZz8Τ,DL Rv|!hBhSV>M〺zlouʠJPɒWmgɾ)'7`05B +'HPQ9$Y0]̉VQ(ȧcN'/e.Ūke9}2QɪFՊ9[v&#`x箈Noah֣jP`4}k+-&v{}IRRm,ذf7LO=x7s⪭\D%m4wqWˬwftd [Yh/dq9 l8csׯf3!$8ܺσ7Oޖb湑a8ozw}v3ohm\wq&.ON[|ﲩ}53֜>t +lq ci~sWB%ZPLy˭͏d{JT&Z#_j*BCΈ @"S BVgwFt<, F!fJs;7ZꩈTc4B,$ M*lz`gB{j,pW SKL=L]0E_ ëG«{˕&_-_ &"FWK#:($> #n]K;rT)+`qaZJRJui25''p{IWj{+zdrjK lS}xm GmoÈ&{p(%n>fء##?:}uICU#8wRҪGl>Fe&:5ﭟQW~ˌvW>ڇMX(Gh70_0[~#\\jAzB]Xl%eLMOY"y~"l i 6<|`53aY&`#rOy).K?9`c,rDQI*5Dt2"3*R/C1kBpʥM(릟p^DPฑgHY0crYDmp;TjViT#g3"Cx-|ჲ-nV͹; pL@pCQ!FLQxcƘ̨d)CqtWJ!j  ZavekmƵҹֹ]mo]vmѓ+ةރV)"^r( ;;GSwov;CsP*BE!.YX}x_ʗ%%ww/xȧJߛѿݼEi2v51H$\  gY:dlͮa* h% ٬4Cx.,(E2!A ԐN:I$mvؐ?9Ш6YN:GilrY钕$5Uto㭻߆ѬuI ]1-5P{hVP"UĺIVu9rUF6`\ɲf];1g3oXDžϽp_8n_sm6t9h[l;u$l_iw3->]C8 $k 0 wt:Z%|6 Eil} ҕ:_i0:ЬxN/<|!ٹG鹱5 P\*%bsVap٧(A *[_$g'A?.Pgv&̑L,Y[hKS[MX,X1Ғxkt J1It /%^'Y6qg\@KkR2D'R(", o#Ą23܀LIa"]\]>xuw{ڼKu ' OC.1ό;Ŕ:Xs߃2LW`*YgsI3ГM&'Ǵ\% UUjlWq`0+HViLĒ_)&βEFwcgm3;uuyxm8 Ep@fORd<| 0d{yj $ÝL;c<'9<:@gNt͍%>뎿,'wf_Vlx$ c!Ty!(.1*&ރȚU*K+eDS?'?* R461ĜXGrGyPL"0J@HxJP:hTͫ|8e(#}{bHos9:l/pc}yl0vBVMusƫ%SJ bf Ҥ $.y.X|ET RksN &IE$j}Ԗhi(9[fd LBw@pkmRqͭioo.׸#/zSyT,MW#n)- E4BFy\wf/g\PIEL^kIx/{vPK u83+xͼĵ"Іkh"pEX1CAH I$qzĽUBƽN4Q7^ﴄU[Xbk^EHi>pN"΃:GS<Ĕ)9S"w(?tiA߷`B 
$D12.*BRJ!Tq>d"M6uTWv$S.[zbgy\'a/-Z58q~N,4|i2RNÄr.ܕAEF`Fez=eؐ FŒ1Y_~̠1PS3Z+/*\(޷eYޛ~7hQ9@zɻTJϤWTέf rh? Ϙ}:6p1 wO_O]vEҮMjZÑ)˰(/ PL=0}~Ľ4 ΁gY{ȑ+ t#@pd7m!k#K$;,_,ܲeIJ)HVksZgedf? s_/gՃ`d|̹:YT/vP6z}1/2^bqa!4#1yaH04,o} +V1^r`mrJ6JQI68WɥZ:K40xR3_1KSgcczO?p StK#`RuVaO%/]a#+xz^f]Kb[*SߕSיL΀H~x__oW˷0Q^_Yؗyp n C&azh8 ͺumU]Naܿ|ݹ/Xb?A.2Ƶ/Uct_>2[qC&fX<(n."R]F=iR;ɏ}< #[2589AS,QDQT2<_7@ OE~n6(r0M;$oSfb$__~TE!5!0ik%@(@{̾a;^oFrw:x@<؀"~2>GX^,/-^1F(b>IfbJ:R qRpUŪ0(_*!Wd?*k~26IROf˨&mk8{H0ڮ J=t S"6lפj~O0q\5Ng6Ы4=Xtrgہoi\!rOn)1H-2}=e/7;̺1qq譲Vo)xE݆f*PvZ!"vb"{اL~^J CLDK9KI4 uNgV'NǏ$HgtYKyXmaQw,2߆27yO~wx uyd:2%\`җ,KqK ;dIPx;!J3=]Ú_ͦ0nF&;l=3((pgZkR vsO!RIˏ|8gS܅XSK`93):eb5>CHS1 ̇RI+2KJCG*|UJ]%޵r+h |b '5{`ט#9.Zerf;:]?8WZ2}ۖFƻ;yv|H؛nZι ΃FΡkSꎎdk*z5M:5s_osjgw?l+J\7/ռ'^YɿN\tS 8玪Ilr+ϝ*ew}^IcUґ[)FGdR8"PBF=\QKDK"XIRNq,z̝vX-"ro0Af/hXI Jzŗ"fԅy.+z$ݵ:U_;S 0싵˫$j:ܢ^+[hsqE* \R,CJPn&D"[ rH# be}0z)#V-a$EVHw,#vN˨5.%"zHO67u"?io.4^ZL90{d"O)5S%Aq@˱dIhBꪒVR# ՈҖx=}p7GJW:K[ztWs16b|+:tWEYd|?pJQN)93Z'now17Qg[LR@~ ĭXl3Pcv`7#n8>.>F_ 3#fq))np09sH[G‰ڂv+ha[Ƣc\'qeڌuNۏ~ZIyXe؞r쬅 $g\Kr1+f mV; :kdfnkڱ#=b憊|T/ /"kyҼ6l;?(m,cnsoiAU.!JEZt@O[r ' fOGkuO tuzτ0 \~05tӫc'#ky`wXW5kdGm<.tf*scrk&Wp:}/`EgUkݖ3*ZWlXVE[]tm|;zLMxK.wVDI/}+}}GyOz)Q7#ގ 'F<|>=s{xCD q4 " 9/E Ț!"&PmQ0hƬ4f ]m>/&L^)\`dvwkB78ZeN0UVcVCyC`tQ 2z Iȫ6]tyŹzcDw0qmC_!MIꨬ6=h`NVQGn|a_YuT 'D9*x+LQĊY\'0J[F(Lp ◰7rg,<O7\?ogmp3 J_}ŦLG eZ30k1ƣ5ͭ\6uIY|F') S@CTBz gX[R`@Pb EtIpe -,@Q;7 qkrT*ͽ)uM(i:W*Og:Ó0@ݢxJ*huN΋;xc; جF톫JPP\ywUVT2RRR&JJ(s"{$,,43¨h-JR>RLGm7 ,3Iu;g7g. / 3典. Js_% 78jvv5sy_sl%+ f Xp"J׃ M`U#J e^z\௢2$x6N0+`X! %Et!"'Ug7Ƕ_Q.kwrm1=%.XUBg W0 ('@{'HcQ06?hTOc=]10BB2Pg($zTXҠ0I̥l]XMOi=vHJ{vS,j .ڋ,Z|S|"IJO!-^ `R-P}-ݞ]LWZz`l-n%+7@dQ,WN}iUp[U8t<-[N~?Aҩkvh)X"DU~/mWȋevw S6MNi>%[i|T|YqgeO7ިmVz1J^yUi}xeж8mW† 1_LNKӺ{gaJs`UW®;JPjҳ] ز|0,"#P̌]ΌϋQEYK6 [ÇAptG[s/ő ! 
WqL IfW#")}L&W&39K@ yx\v5<]Vb#1-wGT5KǗ;÷]-x1m}H0凃3`e*/Six}[C(v\m0zmT* H(wR D "Ⱥzm]a Wl'6e_K(T>+jk )IyE8\0EHrP G E9;+.<ٖ{:Z˪֭'==yv1sNͮޅ|["V~W6nAO S~Y~.[^ʵv>65#ݹrm5WWZk#+Cn[츱S甫U<~s6z3MO)_?E'Txv#C !CضV渫UPb;k+Z`mI9bC M!qBǠ4D&X wε[vHL(4cAwxpL"ʀ dhmFΚGb3>=ӯz (ɔ}[+|{\^mK\zrWΈ7ِk:ii4[8kf]J(=uO4b##ybѸs=Az#~BioTy<:}OuLrMsOϮ[]rg?ƗUEkw?ЮUGȩS=bX] ߼!}ٿ~?;Lwoǽo^vuJ&[`_Ԍ55L:SCO]/:6y?!s/CB4b̀"pY6k>]y[1HwӺF-FlNz@*ʰ9ylP=\Ec$pqF]GODk.2ۅ\޿#mYC'8U$-:R(x 10ћ@AΨ{nH RGGĕqOgjoLID}/U(×Њ,P !T+P8ڗF?Qvx?g=yc(jCf' C=tzvV/o(0zգ/Ot-_$G M.A8LOl4H&={@O~d%^h2R))GĠdh3S 20&&s D !$А_fT FmMQ娀%퀺LxEd P:4"I<2*9 Oxv|r+h"5zNLI4qeS-e,gEUb|7/dߠds fRW;|R¤-8:|B nh*@SZ\Œl.1Kh%@t)b :ۃV@^J h71s<C|fTmAL&ʌdXGh δ [#gӟ@3͗`"ρ\'MO.>mVnjLpgnJȑ"Hx*ʹ1Y+uDИ 9J3[1g>\Q Wet,n-1 TtK,ٮ/cJ@i{.NmkVu| e]!Cq+:1MMӌ 3A.1 1Ľݽv:I?"Y߸cΓMZ?Ng=a=GQ)Mꮞy{؛5{`6C]D}xe8wR<=7qKe3z5y Q4NScp@ :g(@,J'TYíN*dkYKW=S~>)8ZpdCCR V5oj(~cɁnD[QB"ISy#ŸZNѾe_kxmb[h}Վs]}'{ۦ*t|SvYӤEoyX䴴pG\y_T%' k/8ezxPZʫet:΃mʵR^s&MtA;H oW3)&C┡^{-THƱ`c)U1H*TM֌Z3.G)/ [Mu!tAE٦׹тãb²f⧃ůn(`04O_ƆbA6kRxd:Zf$1%&SL0_A}jUcs2dFs6T^8-3A%4` L>&nYcFjF0g\n;6ڬeM;!Eqghf~u)if"GoM&HF)!9zn=$iA-dDЊ<*5!/YdK1&օ$2-YK]ˊqoȵjD޲FF4b %ֈAM,e:Ixޣ@q(x<_pvЮN:I{I;h ہP2JcqGu{Zakuq/ LMBB@WVDTu@Dᢶ @8e]@Ye|N4uDZS 1PKǙO1gJ*S۩ܭBfh#-}W %B(ٻF$W~&<"`/f5 OIE$ՇK$CGRdIijUʊȈ`>'>3G>] p/cC8?k`p&k< yfJr!b&":ہƀ̀v޳1[[ilܪĦ^)suaCbS(Ѯ̔&N͗eo_ǩS?,s@,R W/8#4([6񻛍Gwٻz02ؑRdsOl=mQ[|\>x`r>{X |KZYΛke]5=l:2R1yZwFnrFWeAU; :Q8y_.6znJmF_̈́sNueOVuޖ[ >\m ޞ_~c{1KBD̖{. #t^S.%X+V$:Gd}0'ra-XSxO _f|4O"%Fz@ZrEt9A@L PKd.K:7:ټC|̸#u@KkR2D'R(", o#Ą63܀LIE-\\'\P@xw9ZOi:]ƱCr?|V#fFjR=x$CeTƅKyKȍd7{,PomIqBpUpZ s^Ws*Yg&ɍI*{7RcnTq^r23 Xͨj"'F{ipVs7,lvM.g<=|q1 eT;bjOT'XmSLY58=G;(n V%v18 8<nrrLkeP¡Q@k&a ʳހIޱtu弆ȌMlλvbV`_<}%ϳZqnRd2רH`,xO2OX-crpoc aKiU]9 -\/гO~wLRRY!T]M&'j4qӤU4]M&ZWgzVs}GL' hAdM@NXO%s̕j"73]?* )1y[xT뙏|441\(!!:qAu#.QJr%eTThCbF(+JՒJ qrP| 3|'!P[<_M}C5ǫU]z@Vz۵>ꭏz>-zɎaq7J>ꭏz>Hh6ewv3%X}YHie[VzJ;<KeG>A"l X 6_~ෙ53R`aX=I:8΢淵-@˙h HeIOy "$\{gT^ c^Kٛ Q5x俽6yX'ݿ+d;]qO'6C LyP#eTFLQxcƘ̨d)Cq.1 Y#iDXւ!Ђ#Htm3홄sv\8xɺwcS"~tm$Mt;mzӞM>48NjӆRZZfKdS6?=ϔ~ KP>B@Ӆ>/. 
nxS3{aҜ#&7HM(xsr4}=_/o.햱POg+ȋX|;?;׋P׋enݜ-0ln\hO*-# 5XMM ykDxoеVv92Xf3f3 ϺYĆՌ╍>0l.OL;TWiLzҔZ6_e dٶx:Mʝmmy=>g'T imps ]}9s~E*eqՀīўuX\ ω3KVx&fI_L`-y\n~bfO1{MSuJ.%%37xlJ>eqdDFpYJ{N$?o(+)h2RZQ:"'pJfF#lV4PmU)4lC][LͅƔ2z J簉ѴNiTV? o4Wӓp_PHc0}^킒ۿ:9;͗^rJS~Pهǻ**.PɛTJj5ƕhFɵAsrC$cB]dĻHY0Г F꘭ѥhrSTL ǘj#c5q#~r,BְSX<u.$v_vm=xfGcڜ)@Y[4^s|=D nAp#20ENE l Fݣ XC4hS x-q#sy.]M;ڼ2jڍ&)G3:nrZ]3:lTYQid4ZKɑ̡ۺTo$2@/:kbAMPHcMڹxXMx8uIsQTFD!^-3""J ٬4Cx.8$(E2!'A TFjFčb\묦%EpYCKG0bf}{0ԉճ-ONg+w1! eCp5AH5Zjt2 }D 8u1r @gCeOcFU[idSUjLՅ,5TūvEk zϲL|^}18xzab۠7Lgwq(,?׃瞾iSgL'A.f@_ЕvAɶ11-.Hx7rѥ|ѵ@gpd5=1APbU΂+QvPyzhغ)ma\Zs^'Wf]-=>\{ ޞ_;Z%IR"f=qjV:)}DPLRx+t\\nxm.̉\JOb~Z2d|riEV5%$VjޒeY^'<78~Xy.phiM BVPEp%1M$ 37`)SRXkȱ넋j{NzHSཾ~~W'k%B=UrQQ=Pj.R>K Go|>'kazk$8HbΆ5C#v3ey9ܗ$7&8J}Z(jTqUIMef &gQlEO,NSr֗nн=}pmȬJC̽EgE绍]#cc]/pJXLs*̭f1Du>1ό;Ŕ:XsĿCa"U-8X&K[,Y\L28l.d19A 1Z{?^5q[Vx4Neco/[Eflb#pމ}Yny',%}q>2Q^j^_ Q,o }.hS5(_0 gC^NgonCq/ѷ l=?)Fe O?LZǕv~GCb*%+wpBt"]gbYq}ܯvLK>)6m:?uݗ},jg6a"<8Nav?Mö?Ώ2 -nxQY3 37OT!2^]yV6 FKQWF\>Wo 6ȌB"R*PLF\!f,xO2OX-crpƒL N2YAnypysCͤ4;eT<yrURRB,}$bno@@{"kG rrz/c~tP,XJAĤi p+ sʠhRfRQ"\UJƅ.RYJ6SP<ƘZi*rx9M cRG-@t>$bbV +DnO2;x5ʂIF O ZbTмlGsbϔT %/m][oG+_6r/uIgN/}Hl+o̐Jr$(R-vVUuuUmu$pǣY?zyjPIrN&(c3k\ Fsݢd,gBSA gNϝ-bR0eVeH #vNK/:?ȵ#8Z MsM;UpM?_77Kfn=msx5\lVRe'ZmJZˌ#aϤ [_32y*`Я$NA?]a~}NouLNRij)V0GQ^0˯ `&u8=Zr%1U +qY<lwo]xP}oyYD}X xob57MqWMC{˦o4ՀiWwvq]kW!0@]]kJ(uc\>w1[ShҀ, Y.o`(y7Z®#gXk&2Fc/~L/[ *$(N0pI[u$"¬Ld@Go:z֑$DVMsPZAQq`Xm@IQ)9TPI(L"-Ƚņ_Bur 5$kjM<*)!N=ʛWQGeopVoe^IM덟O>bER#9F-*_ ;?uO\wϺh 8_FbSp91h&)"D ׉20&&k DCp"@Bκ55.ԖڄQg4YNڣ3vv$ qdڌ$>@Զ 34.pq!UW@/ %"vgF$nG) 61_94W{@D[ٻxU) P7/z^tpf5xy.RgiS%c]vXE HE$ (XsDL\|ϕØb?UH#c-1UXDgA9/&):Ň*Ff4Ncpi9C8M0*PkQ1>쌝w;UJNjMqp;9"7Yhӛ+ s2_Z[+oApm3|Pfv%mA3cA:ۓLMW}rwHLnem>0꣩@h;LRπ )cGj#]?Fea8DWuvYjj'mxQxZ!G,n,p6k4vNq˯ \ܽ=]Z8b˪m>tGu~WcY*[o`.; JMY[e&%In$<ӈ+b\ /gZ[|W{%Xt,#F0zA` v,TɒЌe,5;VMjG"qoeWiGuv;s Q;+`ӱfv\`dn2{qzULN#3lɈ"<qeGRǽ ŕRs" 6Rv**s񱋫ֽ̥ ŕfXhbyQT~0柿ۺq׺S*)\@A҄D4w+e J'>b1u(7bZ;XTéj!11>nI2=A(g[6Îژb4[n2MZ=8qݫf"k+Z:Fs[/[XӏмmF97.շ`t/]RY ҍ `1$1? 
>ipDHӮCp Lmz~wrsOG'.F0 >_`Gg!I"o R{miԗsr<۰ٔ(ߏڢfF6:!3AL.h-<Ǐ0Aw]{Ƞa6drc9"H{x^) cy^\"vGFZ i2 +h,( EQB3k18N :gP4DX &gcx;-=',krFx/Gsñevx1YU'm^ނs3|0za8kA3A:ۓLMW{rwHLnem>qCSvҙ꥞SxJjϵQY{;y|Uh趏fAj"ڞGnz ʵCvh,u6k~2N \yx=Zףo8{Ƒl /;R?_Eɮq \l SҚ"#/C$!i)0% kzNꮮ:dJ|{t({Dv=e0^hGjpvbFz$R!L`A舦cP", ;F\< n{18(8N͹2(F2  bl$6!3|)j.HW8ց79̻i[v櫛X3EOmuѿ8kx- w8nͦe;\ʄXttf;}7#?s6"^qsjQ bvU7ZCo.)8?ybH*ME7_s0οNNLsO /}@Ie8:Kd$ypJ5:yr0J('^(ES;k׽;OcTsӟ=* YG'$:N($|:1כ;ʟדEWNT66б^Xԝw[ɤg_OܯFWg'*C ,0:[]MNр\Xmű88R㺰3M/ӌg=]7Ɂ7l0\ smo۹"TޏzKۛun7^!M-I-]75ÚѬyeyb_c`<{ݻiGۼ{8[ed}ӇljYPz0hĩo};DJ<#~FOi?^XySs&gSߖ:?M/|{5}<|}sy*Kƫ1O^W:G8հ'xc$5*QS1~īt_\ޝ۷wߜ_ߞݻs9 e}$% !fT޴iiJ>÷iW|vܮ!sC !9nDi^אhb%9 Ak}G2)udop8%?@ ϛƾJg \xe׍;5oJ/۩ *O&;gi QFTDTHE[%a.c7FVSԳ#S$jStOЁ͎PZA1IZmI1)R & _"C/ʲC)PEqt{Hm>wbN,L-Ev5.I{؞ISf]F-=bJC*-ArhawW \4&KxuTA҇@@#fG G2Q1xΒj}dn!&_HއU~Bz+1祽6\|{'tUDd :4"I<2*9׹O:Vy>DTU*[-bgKHo4|jdt| 6mڽbvtL#O a j\򕙥LvJ"2>NAQMh{U)a[Fbɵ>Nǡpy!A{ ȍ*Z-q{r{+mLN'lU\U]`ǐJ4X~Rg~&sU J"Hy*ʹ1]+uDИ 9Ju35/5v{]۔-Zք ,,ú|W?aS;~s6POUrJ8kC]jԗ(G]ݪۿȀif`҄# Fsuc #xvhݛӿ]B*E蛡ˮJӽr}ț*>ϝ _֋R fZi foS-M.|5oy.nj+6Wvi(Θ4plS[]ɣwk+,500 P{`Z׷]<}6Q0ڇ VewP ^0XYjMN@+M9ITķ1/4^|Q15j"}|*Fi&ruPH`T&hpW[i2^>}W$Q'Gdi-7.Hw]T&H?EQys" NGyPbZ+.Ba\[\\[;SOJvH&hNGAg$JRÙE`!LCbڱ+x(v=@؆MDQq6(: r"QG?n׆C>S8{ Gv-va\YReVN-┵amY- jrP Ag3MNDtb -wN`\,;  4ql 1AxGU̺e1B"/ }!aw(7oAOլ)6f&眬>%=J n`زgbGJgGW7>H/qPA'. 
Pt++5W9XE~:RP"T=5%bOGx?** KYKRS!8MXOr -sa=76pMB IQ9?SRJ-&fJBDn>$'@ {&zl3QCT33Itv;|NZv#qR{@vQK& Gr+="`mg ΌOMzN^9ɟ' ;#@PF\\9[eKyW5Tu~ 鋚';Ao8W@Ow)n: a.AܯȜvP;wLԛA3!9ί"_W(0pr4pŝPxpppRj[zpŌf* UWñU֘C+S + TJ8"B8*s,pjK+A@jzDp+~M竟_(g VEU:Vɹ ̷vd 'BZc-~hGBHR2Q\`-}T{Lӗc˥bz,us6Pl咕p9{BȼuZ*4wқ2!"3"gL$9[sy.ԌBl orZO:u(>px2ٱd}#c\ zoe•+?&ѿ>VBH n%vUEV'2J@fVDvg`l_b^c]P٩F7,RA޶.7iJr*ˬH2v9vC|]ty@|˻(>" Nr,~;9/{l־G}א%UrUOfOɐ}%|)\zڰfn]{v(m7<&SC _eTO7xR/b/A2ʁ9CϮ%L/2Z, y.YoX$y8R/՛y8L .OB#Iʶ ~IQE1`m۩.N.4Xe :iIVj0“!T@P"(% ɡ2ӌl=vг~Sfu"VYDࣅNCXGr˿VׂMɼ4c4%* Tfy8(9B1{mA: p G~ty o5c%,;I"@,v2QDɄYSH܃1( EseQ2sPZ쌑AwxpV[Fp*F24CB Ŧc)9 DWfp3 ,e\x=ޞغyWwə\_6.wG'x\'H\*\sG*+P۲ 2)85| !$hP(g"ZEQ͋L)38 AhEiVEFIv=,&R~ :Ļ&Wby185DS,gxtR)8S`>Ą XK'!c߷Gӱ] |ߜr<@=^@MR QJPź&Q272RYFڧJwtwτرxY?[\;n. 5J֞ivC&QlKSF$? YefZ.`FLsIm{JߟMO>vVleYl" 'Jr|QŽF/Fcwgw䍣H'>N7믦Ư2~>G8ߝ lc;]νtpgݻR:s9.NpYϖKʁ̗64WOPjϪ>v4mݙ4fF6s/5j̝^f<킡IøvxcO@HH7bHwMða&kY$Ȉ!̻OA8Әݻxjͣ4jZD֣eFHX u0ʯKzA;mN{a~%N-;=|\qVdgS,A]L tpû~w.Iɬ$ KH,۟< 2~LJ__?_z/(Wï?Y|/V(y WL{C3Rjho64 %W=]%0_W5da."D#4مɷq򢎱QG:*fpPiik~t1!~֨<hͅWX?gc+uAecs` NI$iKT$J5^MQU:2Ux_4BĕH2ΨߘB`q/`I|Ne%Rh/\B\"eͿUԙ{YoV`DV@ib< )͛=Dky$Z5o&ɕ>mm*09X@}OnLxD Pg0?E#B֖>minKЁFMib\2TE4Љ CP4&& ȿFM$.} 40bp`f JqTmAp,8,Y֗~*65;y^t*qvZ3$<Γj};jnRaUM"2ȝQ$GA>'uAW{"*-aީ6t5]z_l**,b\)D0`Brm ʠR(@+9eh A$Y-gB*>+jk )IyE8\0EH,yj!HQmD;MMs M0*QPk1YΎn=`xX480ͨ+Ԩs0<|&GxWgS޽wqKvƩ6ս@hs׹[=i2kgRa [ϭIvG[:uw[VزRng=?52fZn`͎lOkޅQ#?Fa-&߫t]WgoNϚؠ}j''>h*ts!%cuV8$ סJ:`Jx+PVN <6_wJ*D*N zQ` q"TIhγeHJI@ZXh.P፧:c\k\ "d#` y*v%}* y Yq^x+'( gvl:_ka`2yJf91-^=^U <,nrЮ2z\ WVDTzeAGd{:˝YL:"7~AS%B)h Dp&VXns* V $Q6`mml]k)] q{qܻ.^L;ea'Qk_(|y{p8-d8)`]{{x˴"@"BRx$p@\ȍU&z$ y2T$d(SO+8};kw<ڈ_3N|NH;ݣjâ!+Mg/jГu)2H.21l|^CNBFwL2AVZSí1JSN(s/.\jZѾtRΨzXCXe!A-F'g"NZ>蛊 (*'tYZ pVkΤ )(Ĩ/1GM#ZX<tH2=?*$X1$R*&R%clJ1YX2oJ.[?dd`Ȝ<ܱ>qeoAe{^,!)XZa̵ηȰ+IL  APAh<nCVl`;8bK%82TBݎl`c"^9KMgv~XP1Ej}6+,i+[9, 5LҸN0c4TL䨰m(%C0\oʦLB*PCFdHhϣBY"ED2 `]H"̣QY[XvyX+3?,E"}%"/,y+[QsN ۔&eDPgTs Qo$.xIpfB@CsŨc_y( C<<4tȭ7q SO(jgZy?J,g[ywmv`/QIP&s*FVEjm r益+bmu.8#ZBj䭅*2mu7$ ڢ\[s՗Mݗ( Վ[_%TJ{N*#=o$;-Z-jM9wmmL%JF UNyα:[% 
WD$%ۛ?!EJ",YtwgZNvYd?'%1_`9.:E  !Ái&b{B+!$O6]2@ۙ01a(a\9q  ĖivG^A < gꤍ)|)4ĔN:S&&n$s:Bg|S㻌}t)p@\`V <ay"ge:%*"sP%1y w]<ŋE-^7*[J}ls _wZ@f~_νu3lnz;z7p.S&iA6[l#<d RD-J9+ )/viMG"{)JUr9FSDjbV|̜q(p©TZFjJo<;& b!BЯ67ǗE#4W&:7٧oļ [ M(ZBbF\$::`482D{<+ޭj؜F.,D#kr53HR^*!yY-o@8-Fydk|~>yخ,Y˲It󸆘ټLyKa33bg3ȋg;zp(z',aП] UzG-R;s?Js38F(S#eE B%gB:G~XzvTlUー]nwIhStޖ[\y:,ݍ[J74_GŻQ۶+?j\9& L&j}}հ?6PzUpQPXv^Ӌ`rcep5h<çk5#Nm9&p!C8<9ч6WF@,PR dEYrhssN!q$>;HBH{&m΄~I'<=)9 MB@9T$4$ zâιv#mgõSb.^?NNflgEjթW?!YԒ [.57SWYfybP(=2 Vm]B//E8FE1 TqGT)"tU!Vwׯ_wyB_|o^_o_~L7/q-8|X} <_G|wW?4#m5Mu~En|v./h5d"T*C< , W_ԙ%u^j#ͣ]2*)d=TUXށj/. s$xbd \zm8\~-?Qtv&;gi QJFTDX Lrkl@&!m{G&D}yN4oO\$pF-4cLI-CN%LQЊ, oTtPlBroRx ͌j i}Э=?0:_x`V9{0`w954U" zٝ4Wӛ lNtpe9;D|7;oT&X6Ԩ@Ka T  2: N)J\s/8:H\$3 ȤDdhXhww7;nY|9%19>c_Η٦_ȏ]蚝H,6ߑmC{\-û/x[~j#q;-P Mm锐WAp˃靑F:QSGDdh]* C(P:YDy>edFDFI@y-yS,͂ڄUdAc . dϵ3Jx0<_ +2 V~g]RIMD]J|,42oI~>^w(GE-7GJܯ{<Y~V{vlkƞ7 I)p6*NCJCM\{eH{ş_1L7"XĀCN'|(6yT^ /^:QE0JY#7ŻA7Qq_3`3ߌ ըP \&Uş ? -Ƅ( M#*kT3؀szkQڵne]'^Qqh:Ϳz9|Q@ŪN\Qszs\ս˩#tfo4W1dڰеQh "K6HOmww|L9Qoe@ mPD h娅`\l7D?yrJ=.!&jiؒ`wE OuZ6C[#g8FU[Ndcɞgc,OE2 "m\cgcTytԻg|_Zi (hi7dY9DDl[mA"q?:><ǃ3,Qf-&*D ෮UHĥhI빱1$˪$nI$%xsVY nk}Dahav!g^qg xn`ݿvgWGɟP $d! Sr-=R)AA,5[j &iyynҤ-m@hSy(-a[Ffu1-fa}=lAQ*(U&=,(T \#T Nz/CNyoAA{جKmҾ MR侙t$-3YGU,!/'O)uvayZs`j_&xI#= 'AoLÖdm z< rVTp+R !0\Һt='RGbβH J!ԼQo`?Y7()&=Gmө bdF/cLk#qdV k]wX*,S6M0gyM]l~M{լ\_>nYX4h1\c+ &D!F•P_˼\Xo#E#iHΞѴMlR!`Vk2oKLBDN9Em}b6k]QԌڸC΂} 0u<$όzq&Z+5ӁÆ3GFJ s iU32 hG9AQN#(hԌ8N,"cS6i͈H;DA-XcU >[`%-w ! >ƁUo/P3d? cA׉*LO^{c &ݕyQťWʨ0O%u8{qBqbJ4Uo<Oڢkx ,urAy+{ ܧe,:u*>.q jU߾C|"Xt]^UXV0#)6gF;]W{4Ubƨjl͗.}~swM/Xd> ɾ&:ʝ7\ 30"pV{tHbaJҸ}owΥa*םLgg ~iAJ5&p_Z\>qCqh<~&>٨M/yCs)D3DKvIIEz)_ao?ϋd.r=7?d7#ulH]Yݗ3H+ItG'%`w:|`tt[CVsY,7< "1e:;4܎3v'J-*^Rt bHV+(sCpƈzKMّɩGpbhVڦ{xN{*u=@s/~ ;x@GPSi>-6  bF`tZo'ӘG "h @QAsgѩψRHe|tpVuqumާ(l+:Xr显r&d~(M^J)OiKXeYy˳wmGE0]jr; wo_G>4{QɗazQP&[2,ӽer.? 
gt^L*i[\iֻ{˨)\rbk(mަhe"l^^0cjV~?.^zvot;޹u]ʋhHcup; v'@ؿ;TlϳѺc`v1yWjfgjÎ'~P0vY4a#A29HysQ bjns b =|#M~\ZhYwrs  He8Ia &2e,"۔PfR ;VHEKRɀH"`+(" iPtF#BSs71G EέRȰ aZ)"VD j{E~&)aOSKѧkK<uii7>:H&X#6T3BE\Eq{y5 uG#w+rօmWncΤ:Ih*Ӣt ֤s$rb4<#Q+I9n@q2L2 `{66}o_˓3dDXRSPbvAGf٠V7# y4r:N;- tbf=.6Qk\D؉Jڹاb͌`tpUWZxJT •db"JUAnk*Q+P*Q);:ERJZW`)ZW@B-p%p+͸~c>J+L DWZjJ:z6p*^#&5;~j;%;2\mG:2\mGǁmThQu#JmW]Y9`ytj+r$m+V:Ҷ?\a\ \NIJ?nE0?z}+rQm+V#tJTκ:E\'h=ˏjߎZBWJJ;:Ab\W`J>A&JԪ;J9W8,W@5pJWZx Tupup%`%Dk n[ qt*1S+)@p*qW5pjޚTURvS+%$mJdH?Z\%rh \%ju Twpup`Pp+hu*֚DVƧjI!\=Yϑl!Ƕ#WٺڎZ}m7tWBU*Dk ȕ>3ʻڎJ;:A" Q[W@0#5pȥ-p=vT6-'+(Ep>zVVcgoGnGeӪvp$p˔F5=.`mqݸ)6v#`=bti0!ԕoa[̱>qYu݃ݼc+vʥcgÛAma@H/%FW?> x,+gf+(qYbF|pt0*&1YN/_s7 ;JV'5kMhը5ĉcՖݎʦqBOQԦHf"X5LpIJ+XM-+ X/V*W@ƍ+R@WWR AI*ܢ,< EYxg%*Y4|p8"Zq=X8pz*lhF$p >\<\~~-7]\ۀ0ztU_rK?|~qY̕d}M~O2A?<u,ڼuZ1X'?s(/Ewrh1M[my S? x]Կ-y!5DZY5+}p!Zp?jL.{%·\V,^z埃~1d/a3!u~m6qeU /'_7_4bū*Q J>xZL ZY5VC>|h|}2 6}3FRfEoQP,|LGO<3dz5QRjψu`4!F:-HbOhS岇)e`kqd-E|Zsu0%쟜*KbJ&~SAsHe]<Ӳ#H AJsye !>xõsa.ܗ {Λo&P':Ź;goI5]Uګ1Eeya0}`<YGW7u.zM3VF6:kd[mRyU^{%OQ}ΤBb,ή-޼龌ǩ}z Ы_^\pz / <e%">R` G͌y*´[޵J 5_M{&s4?wg?g~˳?;̜닳w/X|/c9dMJ,w@A[UݫfY8Rߥ^e[i^K.bB b@ۼp},WK,Zp"ȑ+ye8US*~.4;Y >܀zv-1 ?$$be$pej#I?TNO#Dd*x%HQ:9EedV[D$&l#T謏 $r)+x꘤w&v>|_UD-$o1sQ@zFn SG*#L} `nT#^}$j6ג\lГmUyЫd1Rx:rpDGɃCЫ:*W>u ObLJ1䓂J1y9.$Ж&&Jub%Z\![nAo쿟Է"JINH:g)hdY' a'55SH^XNh% Ae  S7N"8hH1*OkbBdžڙ8 >jF뤇G3=k2*1w2/rف{Q4qހO22n ow 0Wd(]?~2=HjEksNZ1yWr4lpk&QGiBǷFvC&{.2½geɕJx2uBnG70Kª M׷Fu%v퇃@1kwzmֱצG}Dϵ0<0dY&e*YH؈ .d҈h!CS.x#@Ȑ dсF_ŀ8B>d8$(h:֤~-;ƃ9+#="?zģGSJ ـ?^ƐQK`a-Vokk(-IiR>R*9I3TCק?u&;_GX"i3-/8ţ_xbD:鐴A@!hހ$*B} ZQ˙C` Rڣ_x(3ʎw܃ p;7 ]mw~0ݢ)6(8G?pqp~"psTȻP^_zÎ.j&q%)XmXh*QoVy({򻸠T4Bd+RB*=DB|`)Eceݢ`&Q)ML $NOC'bN0p#ADb1 LW|ww- TCF:&ԁH& R 2)%R:뙕F$5>>kuH8`'xPw_oy-`5H6.JtY+kg30f|Nu IٓD,Ϟ'/]n+z5\ GwXn]7zǷ9{2+W?MZW7qMD&70F%SdC@W[Y= .3%ONS~ >ȃK@ՎY42kj H]]6] ozvoGՃw~:My[n*YjL~gƓѠj)VPxh|ϼH ̻zO'jcq=s{N'3ʛ26R":ɷDPJa8E; B:cJ`^9=zN⑯y[QrhCjKVxM1 tqMCh@sJ>K,hX`[FEL1Vz#0W@ˬ+qv8d!}J0RǷ87YeQ/T4AKWᎢ_p[gZ6Bi/}?CŹhZ7OKDzrwdIeŦdf8H3Gg=1Gj?&)r3u $ȉR4gKrUAB"/HD yaKʐS,V1 yC!-o 
EcB)7j<ۤuYqGkZ vȹjCDa6>x^gc޽}"~A@}F 0 C.Rfs^bpN59 3ך=TiZެ~XR{L98{dO)5S`%VŘ׮vcuvꞁ0/6ANZ67?m*]u W6Kingtخižl}TkJ\;Tirȹ5r¨CR}òS-: W"O{4gESstlYUBAf{l v@W{#cJ`0 SlT: OB;K]*Mmg&QhXG `| v3Nlfmm~'_.|aW(z$5<8Z-#GSa>1Np4N4B嚍;)'P ,cqF;83딵J05B %ZsyZ 8C;v;g~.ɻYɣ4JwU|{V<ħ@ [m64^) CA 5F C1%I3QsGS<(dhT=yna3A{8fV9mtރ mJlOdKx %p!/RsoAUB*N{&"b`:bHj9ADLZ:5n&U~[ck]C␥\xjΞ)Ɵx‹&!&}ƧeAP}'R"M~1:hr&Y4K_S hp1SeA9(}e ajEb`鿙3 >~.z}_+iM,Lk\ޔT} -^ؕ')®{IzfU6˗ln|tIs ;&n 4WѰ@]= 2=~;GLDU]90_7L4o !w=,X6ΝPw ٳ[0Gi#)V]5 k(rARd`z 槓;n-QX {~a4-%paCH(Rde@.(b Y/1j1JlG%t"?C{6",>AuL)M'҇,}+qD-`ѴpH~,-Tz D%S] -Yb7Cw^p9~IM>LrJ͙"O_˶k0lέ9-ؖ{FfOd2.Eupyf)ހY8̯>_ۨ,%QGie0r:c[t]+T%z%[aˢxya]Lۉd4K.՚2kz}`Ăn7 >ZYKY,DB!X&HSE57V"FMsFV;||^9;8 .0mg< #T|gb‹ lcQ^s-c1 ).Zlx LqS\tny$X l.cϸG:n`2UAp;oOE!gh՞2# baJ;wq; <5=oLRP#R\Si>-6  bF`tZw'nyӘG Ck 62j9Q1|p'%5r89Waoӡz447e4%IPQ !}4 2\%A&Q{ 2J U"XQW\)E]%j59tuT`^bX!JH]%'rѤ'j5:tuTbԡ8AU-*ǢJTjީ\#RW`-F]Jq*QQWND8tȕXUVCWW@@SW/P]))G{U"WcQW@-Cu%+X.>eX|ͤHl KORzmigM]u=I4˵KJF#rnM e!v>NcyX؇UhwlG }rf_e/^l'B̢@4JҒS+cH1$Y%E#h:55~=(9I}:tF"T)cw={ƫ'T sqǏ^(U8X~ͩ^a6|k@S~jyIp fiY\TϬ}d}̮"fpdef.i2-&!J/rj|֋e`:*L-M 0uy]S)R2fd ,2M (` Eg4*D9`j/JqTQ( ȀA[R.k%%RHDcۍZ#,_~&)uC"O^}'} l]λࣃO15hf2yu Tȉ(g0r~tTs3c;d(@vŀ"*w8xl`eE̱d-2JF) H*xX xd&:km( (WK `vBL1jv̚ Ax EKڭGg-ٲ՜" Y^rZK\sMr˹Oquw& ˮސ}CςD^҈6ԟbzE8ER%9%pF4-@XD^DGVJj}- np]ZA353ב $GǤ"90AP+q2AI ~; Vy,  )#OHXS Vo'xrε:ۉGX=q>DyF %t$Ui_[6d9o)'*{G(6^u'iҡvX}/pp~fl=.էkYWKltSk&4I]}`)4-ݔó~*`cev=E?_\KESbL&W)V` $"a^HAESSSPWa_aYiC(XK|*% +,Hߝ~y0V!|u~|fa)=ugk66"W VzRzu6-~b"e.r6Gc L?$u {E09RUH-=S}wVݞ܌ksyWv!%1WK^T߮킣Xm x35Ci&Agb|LWMӐiifY> 0: V0bGBsMdmuM6>Rg-aFH8诘%>٨:X;}Ou\R)˛/>vt5NKO32 (1Ԁ J1St0-8P3,KGa>q2z&wۑ^3Nն,W8tΫZrCO՟ΉH)G҈MZ"Hyȱ{y> ;-b z;];ipĊW4v_or`}s̸*2(0LV ^m>hNH>nˮeW8 RaL1*H\'Z$c &ςr^8)YQ `sLq[ p F%`)MfpR9* bi=ρg:dG0Ű_N2'yG[A?7ҷ˛Ngʝ=SRTfnfGY-Gm6Zwroצ#=jl3o/:K3)Sҝ95*l]asjzxeGnhe-F[ۋ7H^5>,!+-$ܯhTgnhE=@EK@mzOR5h6>mRWmr-<@z;u-%>R"RܼvDd$ۡ+\T^δ,.gtG;~$HA`@xH֫Rz1R% '! 
j!&&YjhA@xAG#8DnyAZn Y*Lҋu0Io3Yqy]VC抓9 3r{h:`MJ1^}$*n$/qj<}⬯Ɠч Q̊**S* ˬD2+SAxG(ȣRbI /!10;0Kx4Wߒ.AJg)1sStRN R"_*.v!Ă2StHj9Va1=k1AgS]Ƿ=ӔUG'"BbDPn1 q@@ZC\&v9y`<Ӕt韦|C(w I-#]DRy%ۺE%s+!.f%M2eE(($aȊj*nuiϩC ї@"P>4$Xc>bNw,8CPk7Qj 'Ao]W5HĻ^ =|^MQ3-`)(#`} h8ł/S64ثc JgINA&P10+cBP4.VVڅjXH < R1:'&Z"$[icr: Dx;dl|EOW9Ymٕ |d*g,jRl <0pw,Yҗ]z-QWAuӀͷZ|e'j]xMRsv%+pF'@Nq%\#ҩI :-(hn"zr1gT?hI+5 F# 4<½xˍws Ls22͜K"x=r%9rQ1C&Qտ"Cwgiv܋{w˯lqotmVlq8Dm<v<$@Sm.J4Akk,'QI 2bmxI1m2)HWèMf;]Oy6`7VDi IUƖN^6:J) 0zt7:5qȞ[=VxTI6.1$д0<(@&6pgs(7'vV_W 0Mhf!C,PT.r;d!T‰gn}*/4zG 1%E=&og*mHuZ*4sқe"FƝqF[H KQ&$7\9癁yfНFey{39g h;8}Cdt%)=A.W]4\<|m]7H3\y! q )]Pe_^ C]`gp?3v0p=_w`Q2m]ߴ1=k:׍8_]4}_x0B_:$OaǓ('0hsa]qɋ\I|Ez*;;CZm66UZ~:o@Xc~C MVôR5}_u5w R:K~{C %|M)З c^昼$Pڪ{;r\R-%/Ҳwּʡ3a:XxR}<3GΓ@vϓ  ~loF7_..w%g?EdR_xXRon ^@_ܡV/??v\x{S3XMg}\&'[k>ף\f|fNF7UEWc6@_}+Nad7,ig_~"t2hVW_5?L_^o}ŒZF0,+&y·^gk?E<43WO߿lw)6G%f˾w,,ulq Ncuphf-@жSSYN<4dcjޮ#1^\}m}zWzxMP[ 3ӭ3=Mx_7PGzL9MMC{Eq]rNH>d6&#?1$_4}iADfqsJkJmq%Cᄁ<>@fms ȁ:zwN-yOPtܥjUE9ե0lo#1\Myv`o[\l);Qb8vk+:K{Lؐ(SR ਲ਼ \qLj#*)JyȰ2; I/$P\Fزv|R'Biã""yS6̉ P!=gK82N/kOI(G<5;r; . !IIL!ZEQKu:PwSz!T1HT4\ֹ#EJaC(Gea[;-Ou٧ .VI:D-TTY+8xP2/4'&˕e"/H- E ? 
ȟBԃ8s(L\IѾtmR9lIay?Nn+?ߞ K-''R wp6wf;laE;Y&V7jI5.T{zןJ^Ydt:(Oy[+5AFsO(㉂b,Q?9G8 Pt=k ɸ|>蔸U1Hn:ҞXw{rX\Tg)8B /\(?&#˜n"b7uAeA:{'mڬtC\Ĕй5Eu< /{zi8C&{6P>7nI%wr;"XF&J.jU;= <]L;ڴ׆kwv)2ԬϜqhHLdH؈6A2A*D4J A̡՛zpdȈ #,*5!/iK BdXwÚ/,c<XL?GdG<^%2" R"h?qNomJ^ J @dV-{fFTqDdg\p^h E=2iU\=uźs#"~:iɱ~_ܧƒJ'Q}$F4i H.xIp%0jBOӎc(q\؁CQ^yp5Il?yE"~|GяbCe$U5OW@$s+HsFVTm r-JPf>$VEZuLyRjsw٭"ALE8 wAb" ?ygLDJ7w:ؾP]Χ?F!5[٤x.ܜX!Ab>G)WBhk[* JPI(Px0+rҙ.m`dռ+&9}>MN~&k!B -ڂ,.~kURWJHĮP`` r 8tRJ +- .,\t&<$K~5K{v"J1ayic0YVK|X *?}SʉQqݗf&Rqh@H֫蘔` q"TIhi^0 I E9{Md^ *TG#u` `A牖4$O-mMn˫p;,rgkc;[fGasu9Sg/ :nY%1}m)~=Sރ*t|u1ĉcQY\y\u,@G IJ-kCto%bXvG(LC\:xwǣgn5ubFz$R!L`A舦cP", ;RlHL(4cAwxp̸EE1 w6 2nMY%>芛Ud \ZqMfԏ0v&֯n_>?EkY^⻬r[i@P?UpQ&KesxHIeKR~ǜCJ:~PRlfymŵeLȔ)VcӈH2.>0gY.'l}u~rY<)}OeƻRopx c2fyFc ;rŽ&裒_?YpWG9Q<7ElFjVQ Fb eDfr~Zvm+Ka.(?Vcخ8PF^˛ߞrp RbcK 6t knJ^Uey1Ѹ((rB)\g\O0?cgEd pvFiD@KydTrb8rT[+ {S:7j;[=tzўb$oɾ nݾo*Z` . Z'H( D "z ?{>cR!pI*H\$|W$ |s"J$<yZ>b1JlF451O hN2`T<G'ce~ؚ8;T-p|zU6N4v֕J9-Z[{C˔& fxQdaܤ-ܰ^f墨^wz';2~Am#?k~*u+x7f]v)75O1%2$eYqo>EH5AIٖh ztϜ9dnxdq6|UU4K7XWK`.dqR^׊N}hcsOփ '%&Im,5*OJjᬶ}bROpÓe8'97VֆeͲ%RҿT+Kq_ZKb nTA{mV/Or+g:kcw -i?Kn{]Í%H{L~ iSA_CumeU7]kl˦-.LLIvQ6abxII|4?~j>ׁo~t& `4ep<=ep1$_g~ge/pAzowg}Y\arz2/{ˌ#>ϬYń^ͷOǻj8qfR|Nt![X0#Lo҈۾˛OţC7G_'fkezp|C,雥p6>R{4]z?~~.P^_7?~:٤Ɓ޷1Mb%jzeϳS>6 NcEps5gx[b\˚Vm·(q_SޯK^ "ڡ6և;1Muhz񍗉~ \ |7JP)r27u4W)p?/cg)ec"^udfH&m4h>(4Ʊ3oVz+xɓ=40لznsy=VμyrO:sѸ;7O'٤z{lObCm tM9RZrzl-7.`;41u- 4Tm}u]3MdJG|ML2L "b2ZayVj'-GI- ݂iۉ!J"Tx`QҢE 0C DLB:F)xy@/ygEN{/8̌qa7$q[8G e24!̠LG夸qMN!c fTPN+S-RPd F6+BADmxpAj8p0b,#v͐ݦ9{Eޟ?ב NQPi n0k8RƴpKb%0)X%z@.*DO> 03& T)JQsG %%hmRI+ThɭOZG؋VM;KNQ"穾,K*'k#'8>j6{#32x5sf"TndfȘ/Vb!pEX}z"iZГ&͍_P`x=OYX46cƂ;TZC0Lh 6Jb$\YI8k+wU42hOM$& fܖh/#v@Ĵ.Dʌ̹/mAlޱ+j̨;";wi*dɟ*MVAj qg" k#88f!Cddю Xr0 >0GQAs?Ie5%sǮH3#"C;3`c/ ArY˽3&F!J [BrX5fEDf 7Bb( gN%6d҄ɤ\eFllQ<MsټdW\dquŰQ(x H+]ڪa띄j gR`M8@#8.ۂټcW<{bYUױ~B= 7б敘c3_|_.GqL)_&q^EFf씱63wo A!-t\V|i1H}[a ђk8"(lA#wet1549촿e+67[#2j@I I̳ (+8? ibR)lPA2ƃt[6$/)CFMͣ,* !4 x=yޛCHjf%HEZ6vl !fH ^ wE%K뫪߶˜-k3xMRJh-*OVNd'lcYК3c}͘7F.E]ttHrQW:ոB$IHKUJPm. 
}E7/)t7/f*BS$qswx6] g 4gf>Lpo)ˏ(XVj ~Vw+o>Ћ*zI ?l0e6x*]y,/gbple%fҺ:51oSNmΪek^hZLƗ |OWeWRzp@X)~uo #W^ 돦if0\ 0EA3XR[ᐭ n ҝ^+d^|I3%m.S;P({賦`V e($^ @6Wx>!RK.Ʊ` Bkb !KQEmvljjS(bOx1bU%<9aB*2yLJ!L>_j6]O.w=L*0m%41sTC1-G&9P,ڂ-T)H#d<{gW힚WGEuz޻,N{nt~kή;[p˽J-x_nYhi`MQ^|X6iy-քYm i pٛyiLeS}(i'귡KMz7\~^_6ʓ_`mvE3 X*g@13ootsZ v@B=vց"-BC*2{7)ox1+Ը"J+ -1ffu䈽 3z°ծ[o8g. SuE -4K /O &ldu~쵲2aDBP5;LjnDh:vF~Iܝ[A+tW@ۥ0Ja(BmpfL] }Td[Ƣc\g8Q32X0taE$<+2g5q|۱~8#Cn`k.PHx3IHj|># "s4‚Swyo{3)n&35IO@fJ%SIbb4+2C]7rp߅z D? u#G]p:B1 $+ ǨrL.RAD dT F)R}BqĊ"-1͑UbRWf~S k;_Ln ^mEOGg7>P#Ҏ2}$Zbm@2XJKeB#鴂N(4¥&PfTetLD刊`) N"->:X6s@faiqhW9Q󎼢,Yr>|2Pp-c̱b: rj/9vR0|%gS4awXC|)\xW2nF fvх 1L}2XQs)FǢF%jJJ;5P^*WRQDL \%kn;\%+++8[~9GW\Zp*YIxW_!\i%5GW`qKo4~E!hi/Nzü 9Yr6 |}1>5ys'IH/ZƩJj[+s!$+F[t΁z9)!K u}d S| mDiivJVCJ Vj/ "$x㈶զlR\=D@`1zҼXt):Q} ]&|J'{XMI1!Ku'Xg]!(:ўD }m.5ˎ0H%GW(_ 6XSN " ^TTlPtA[wZ 4@shRsV*A5yPyU벫ԕA[4AAN:2]:Xi65lD]`#)#khߤ=kyGD)J6NjA!NMFAꕶcu2R".5TuA"F_R]zFӘy}W@BM6y-d)-%VSMȲZPҰ*4]AՊXZI །Aڛ^̈KQEn"棊1͋BIJBm*y 7_{VߊkUе3[Xw ڦL[oT^ҧMJ)h+JUe ,äc*Jr z_fR(39՜ HDNW2^i C.k/AXżۆjbPb$Sa|]]l\`D>"CO&XVXktdDPHc ѣ.O9mI B*$*:աB-10 zv5Ԙ jYf}Jh v% Aڱ", "+P(vw5$BjF,F,Z{65^ PТ td!.h]`mBgRHQ-fWbd@BUq@ơ"K*jV,*&a!dE(3A68Ftbl,'J s nwB:iϢ;i$tZR0=UK mry7Xq=xzdhnNǴ'e\GIj0f$LC'Kac+i0GBg'QEŨն[SQ5(U/ yHY=yh4vM Ƥӄ_ ArR{R@k80)QC^":$5\Qr#bs"I'%\IbLvP%(H Ш3) @)w!kQMƮXBv߁yE!8ij)ڥA$\s; 9,RDyŰ aLO(ƈMnƢb$ӽİ]3+~x$mDVc-S{NB [HkҬUg($u21KCR 9Ok{iU[֨ʓcD]7}Ұ!lL/U}^=iy-xM0n7sfI24i`C9iyQٷQGfdi7] Z<S9SEBLAr8ҏe]{q=@GcEȲ5h",ҚZ.Vs4AUhc׺؞A:x}l4n7-z"9dkwfڲNK헓ŋt?V?毽?ofuX06qA@^7wF}E6ǯ B3 2UmQ=|#nP:5gialV{W9UAW{RK>W{Q6;fCWw>tEh/NJ癮BVa6tEp ]Ze:D (՜ׇj (GHWD9bN%kgS ZkWrk*G6K߷8 К/DWCoŠy]Ozi0qFtb.tEh)]Ju+\JY/w"0] ]ir=]iO1Hp ]ZPtute 9]`?"AͅmNW@ Ve:@=w8V̆s+Bľ9m=#`#CWW͆NWr1xti{*])=]~BW@:Ռp*нrjFtE ]"f Jp.='8Z9mV6QB#տ ]=t.͗/XIqpw8;yǡܷ+LW:h͌G1ƹЕ m}ʭ3ژbpY:;[-ǓmDl~mzuqryFlY3K?+^DSt) 1)3ZhAdG^.Om //<]vzu>O7ÿxG;rjcMv&!bl;1Fhw1F(b1vbL c6i~T)>*|>yHKU9Owu >1kECs[:+7S[W&\w8fcb.m=fM9/=vjJ:.ȳLG_7&u~}RC'j'6x'LӊkH/yQ:m.~l<'d8~叓OOşʣ)h*zGLQQgI}PYgRZTE[~y-'{q0CH! 
1lHvasE,E*$_ I$R(Δx%3~}uuzZfx=R6C#iF*ϐT>ʥAGg>bs`K-g ܕc+AȤUMܤUYLX"Zw )%qbW CWQa覗FcX6V벾l_窯b)jNFLa—}sƼ' ?Uq'SW]B:>jmnqޮpۍ絖]B:N/!OʾέruCכ ƒ@' 0 0YcO5+av6?ײVFӀH͆ЌUodhtugf|6S Em _Fyqx W.ٖp0f/aw@B+C kqma4} oi2Kot]Cq$UeSi{ \c^{1p-dpr|@s`_E\ WY׃{IǺBJQ5BG\ Wyg%!\`<\\ ;RوJs"+,\\KDWpq5@\hAtA+\Z+Tcj4+&+k+Pk;@1v5D\9ׄpMj2P}tѺ$ gpY"''&%S3?Inz5Wo9\y Lߪs< ;Z\jG22dES!+dE ::Ym˦&6s\ vU0u ^v*lF\\߃#AW(**²VTjq5@\ p Wdpr+TTq* SNIW v*;RJ#+%rA55 Pu*}t+c\c9j _'2C]vLʖ [׹Bcyy.n*Dtru^i\ XxU?؞jMXB.6xd鬷jŤ bZ+x]lP鍊.]laPfdpr+Tk]q*= 2BHBB+,\ZLq*Eqg 6t\:oQg}LD\ W^7p5dpU%BRWR7+ײ݄AËcW!m'ɵǃz0gob\cƝ$+l"+뜡+P;̊J)"+l#+4\Z T:f"+i@tA\ڮ62*jR[f [:1*BWRJ@w qAjUJcWCĕPPP}ۑWg7W K:BS5wQWĕsqJPp秈5 P .jN2N)v5}W(H*Bq#o~œV3Zw#Ln׻ڃj֑u1W>W>ئ vtpr+P;P%W&hA*L6dpr]0};?,[Kɺ' u:0TѺ"L-1A3uHAԊx~<ŌAO&QPuL}Ne=]LŽn5#b\ȸبV~U*]ڃMHiC{t0P% j]WgE,P.զ@Bt@_ WW4=LHIW(s.CW* V;BB]od5\Zћ* $WJ+BP{\OWV޺B$[BBʵdATڶq*"`FS]`+ĮPY ;Ggp27O)v2rAT{+Py WVhw^++ tB W22 U lZ1BqA W+L |qNv#P \E\\M'Lji$XI1vjSy 'Le^G\\Z [NW(1*B^WR1q5@\ 饒p"++aj}`Yp%32` W(W*B 42jR0'+l +%\Z+Tٳ"΄+΅^HTa~|X#mLD%UBP75ճ) ,Ax tqvt=[֜Z؜3$ΔKle2Y@ƒqQ%DF}{3gq wP=ZNW֙ UF\ W`1R t+f^B"WNK%-!\`Og{0]zG\ W^km) FǺB\S(7vJ+ѲńilGJ k\[WajUGllDDձMT@c PPTpj;PWĕ0꼡 z JAWV U!J#%\` j2B >jRYC)vdpr+T]q*M] WƁuc~̡@KjN@hMͅNz=&YgWǽ޿N@E}Y}r;ͫm^ݝz_P]_*ȫWl}lU?l"?tIuC_=@ׯPlUHxӗ?X}G5w_f$~їjt;;\Vb,ScQ_ggᲚ6 Sel>ūu%SPa~/Wn^nwv7,/`"X̶ ]sU,?O"ɲm bLpx%/{v[/r5\7Iw݌)~ߡVz]OOPߵJwsdV!EʗE~9LY2HOJd{{4`i6ɂ{x-.G?Pz}N_z?Zܮl7ZVͻ}.(+)Э.G_5͏FW07S(Jyg[}JXg3ocBִqrL|&6 *inqsI4UcB-H>ef+IY -h01~^.bgFw{\2vЖ xs)u}Ozo.kl]ו㬯w '[N @4乵J%(6)KW2ᩑ4Q')3Vfgek}V M<5R$>)rVjRqS*t&XSsY)¿d)xy`X{c]5{d\}`[-S=5¸LSM/Z<5԰0K I fQU'~5^i聥g">UhF(t|l|DS39Yt^~,]9)JLΤ+˓qjg&Wҹ<vf3jg2c%ILeF.L8c˄N _dZxY>D^KyӜg`zSTH'>Utf̙D%kvYP3!c[wM+(QK,g ǔ9J):U-rd^ii̤Nh n{Qp!X.S2u>SiR+uar-in=,W'O&yk~?ѽQ!V3T20V=\J>~=Χǯ5Njr-L[~18jg)2KpkLZ .&:ME`੒妴ٲZx̘8ݮz`͌e^,LiLXSk,3e*$t^,T$0 #5J/ąǪC_̋zO% ^.f~ V`mS9rY NrV&r-e.( E8lE7B;xx55H\jIxƵLf6Ϲ/m$IVcTI\^9&_ātځthY^S{]fhE\ϽOS(:|1"X_7"X@ѿXd" (]b f5z_*T%\{$ٽ$SFIu 5 N1LLn5*$UqξMVk;=|ʓ"M M-|+W0ݨT4 ?d|/7f!oɦfmsDwGcSwzzdGnhMR]іoa)X~gJU=G{e\=/0# 
u";, KnDZ,0~MN(¤b:G)B'e2ł@z-bmJT!Ieg!WKibd.5h V.ܚ gp 7V˻jZqbHzenMS_hCԛfOIb4| )Viipg8{gx>ټ{)뒌3ȏrH-R^GG^WwXtHd] 1@RqNO]I41jc3e4Y]b5l--}L?a[/ȯIZdm.H6YU&f526&Pz %)hBNx׸(s6UDHPe[be}rT$)ogt{Q 瀢ozaZI4s+;WԛS~͡y|ˎEDK{\HVy`w/ʔl MQ2OS 'D6$hz2E0J@bNP>''RZ Tkdl6؎4fq_,>+,}xˁ&<[l]~lئxU OzF+4;lJKQ+}$,P(e ~ UŤFTZUΓ3cUH ɒ P.<4Ffُ8 y,:ڪ1j3hR5Y_%8T K5 6a)YBr*&s7b`aPVѤgIYSY9&SIT>xl8GT+0M9Vq_DԍQ8 N喑^eA5[y6QeDK g dmP H!73 yɊҨRq^g b%KC uf9 iǴJ1.\ܙ/n"`FVPpBb"2AƻZ/%2BkÀǂͬxh㡽>5<,c9'~}cSFIëɫsӏ̃iv@ΔꛊRSP&"3Cf{G!xϱ|M^TB'\4:! g0rPEfaiNdڲ`e@FycEKcaQ *84 $ `q*Bcl8-o. # Rܕ؃-aY:L咈46TqXS!Wg̓6q&/wg\oa 㾮pD.eS}&x.,:ՅBgB(e:@tVAo=UVO{[Er I#Hy@IkerׄGe(4TbtH6#yC)m 6c2$Me2SS( Bbl>BVY<*#ȡ(̩$! B8g6BJȒOX)fR9:{w\8f” T`~pp#/hOGY6B2e)VJy0:Y,d){[S *R((djս?=tJkI(bPS1 2 ]Zn66 %RD[oiw uvp?gwpN|"t$8P'>0iN}nb*fgq;oB򎑍PNg!(KAΰBPeOQU4PH8YS.8IYҤzSw3 FpI%x7]dGoUٻK_^?m_Οy.|o.f|o'B$K Vs{zu4I#E;|_XY堔Q۵oMюR2>VDT>|MgVF>ң&'ҪD##z͆߉XŢ5z,miTo6h3y0}wm]wc#fY2[D50XҮw t*S}DYVMbЩk]+ql69OJ.FŤko8gyz{0e_.Z3h]HIQs=$r&(#(PVT.je4aRbL?Ƹǯvl.2OZǷx"ߡO J,Z/Zh,t*)k$\YH"<"T[^ 0<0HV2 BF,v6QR 'V VMV&xkyv<ϼܞx~[ӷ{뱃[feD|[s[|dAT5O9cv4 hD K.jc|kC 3/O@>Bπ<(搂 PˬJJO^\SU>\T @vjÓ m6#dzz㢥Q.;-[9Pd15[8`O"y&9"&ⳇġ}"RZٍq/>F|W^7 -./qWYbTY3DkeAX`LhLXIBB0}4ndWR~qf'/qvrgwI:i |'lyO]wGoYG >bfO,|3H^(&ŽJrĎ&y_xgI⛜L'y4̴yx7{ͧ)M''4lT)@xa^H'Ayף~Xpяu&/rQiMW*Z~Bd|v::y՗Q8~aEԥ3AN!JyϓdNjyŨ-lߌ[.xj L5 b Fk.Qå7/^]3:tmtͼ6CVmR.l_ۘҨ,I/_~evB1ZjʖBчF.֟{kxRs3 c[,_g=;F\/~ ո8M˃Wu}f[k7 z}pB+g6s3ql |I-96 ":Yg*EYp!SnOT]7-bjbw98ϣ!lCuk/ cv.A,X%'=Z$KWdGݷd)z`Onqe- ,)np1?>_=MO@1MO`6]չ|uw^}Zs-\o|dʏt1_g&{\moIx}gڥŀns՝HWi"eBbiJCJĪRBe%ikλZeܟf}Nѷwh |+Qy'HLj*5WEŒ9L"+T:D d&bַG) O>|^Eѯ^@{'Uߦr=S%Yݫ/ΏO^]Vq^8?/m=\ycϴէ'2;YEt1N?H?r vx4}sۋ]{(p<;iO|dnoਞɊJyi8;8:$ɄᨼG_а?k9%X{ͺ\mքD ~gX/b.M=[.>-{C?]_|\r* >k1.>8 {[?2hW^Yk }zl뵛{mZ3nmx D{Թu'Gy 掹-#i:~^fS zی $5~Q<Ⱦ);)Ԩ7Ved,*IY-RI[iGi۽=J=O]֣e-b߷q_./;\V(A6[UߒD hUזMFFaz YLƪ\P)&jх](cZ"ېS**zEZ:QlWcQ4R\RR(OvJk]Vx'/CS^JfԭR :KʎZ:M_yqz"֌%)m]3BJᨨf*LR0URB0'ZD Lf cSJx38f3NYVI]C)Fa]qnh gF74Y!)!d*ҡaӮ MJÛTwLa03.MgOb15ZErm&  \B{ O}=?;^m,U6-mJ ^!Ec}Jk*rH&9j*K'jjdU9挑9QJr-^4kr-;nu}dSa4n 
D+6#8:BҲcF`4Vؔj)F(%ulaT[2KŁ $ԓŬs֞Dqˌ>$%UVش+!Kb& ˨0iJ!BQD \.%Ɇ0!^& F2UczC6rJc%'`BxZCۆz8FQML"Z X|dKJȠ-]̍@Kkcn5iXL5e :BfC5ehc]VП:ಮ>`ds0F!(MѰ*C,Zex@WwAQtU:-njwSO-Mnh+US%b5eXF*x f _>z00Aƫԁq@82,}p,}U2z(9Pmi>jZ.= p1)v1k/X ~a-!=%EPiDH2 !YiIQp¸0F@_H^|$/eڃ;76D n*^XdunR& :UϳW5W1p+EM yY'O7Vh2GUD|ҷK*a%VX(;Kb f R;ЋI$ ˽ګG.r!P-fP Pޘ lcJII 9د!ss@ȃ,L >^@|6ٕ;f,u惙3`0bduJ>&4#B wäDzXmڔRA4G>#ҍZ h'r9V3"XAR`T QDzE`͂z5z=6m6@+ AJ*UJ<*PjG9^{5"w^^0dR=-1Ԕ*Q=յ0QY`-й'`ҥST.q zcsjg7j1sqw rIXAR䳗ҭd&Us'k?uNq~5(JS>俹,7S@P  F98 Ǎ .L:JVyÀek΀x"=`eT}_HO,~)AȒ8AN{Nq1:㽱l,1e J˥>t1>蘄jɮxI8]HAWrW}&( q鼩=ˏp]K}wECQR o`j?mzomqy7-} k)̀KY,ٺ"Z|Dx6GJC lӈW3o)Xzxx~?7WXe޺:yY][ &a{^~ۖ=UXVk{R_U$HWF>'A#J?%W\}+=ΰW,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\= $_ʨW0Wg#䞼 VZɂQp) Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,bc HHLg$"gsU6ا.HUXp HvZƂ+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xp+գs\}>+k`'/R2Yp t_e Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,TpPVS^cOVU Pð-|ڝk-a}jƼ~Rnuq~|b0kϵw-gk.m݌8,/.qR8 yਞ`/ojqu0}Cop<;i$ڕؽ4lG$A?}A~aV\6 sZȽgq;Zfggk;Iv竷0= 8_k.41qbj9vǕӋ⾵D~(.3 t°o>oaMCqF攆+m_0-?䏼 ^M. 5YcȒ*JvҢ($[i[N ERwa\KrS6Yv=by ,k3tfMOF2{rv]db;Eok\`!+GQcͤ׷S[dɴ<8t)N ~ۼarGaRNF׎m3Rw#YLޅN%g_ɢů٬o">Qj9pX.|{FFE *m8OI ŸbhNъyradbV#ט| 3;h,`oM=? |jMW$K/./Wo+&̸H| 7E/f܅]Ϣi^Tc7ZFk.%,+fyJ{ș&ш[c-':-Mg\dWVOeނq8e_w߿8UjW']Go=:9eG[.Ýoe%yysErX e 7*y}ʆ U Ki 9#+KTjË208D/XKB J`NW$E kUm$pWQC/6ːvvU~Rʽtb[7h{{q 7=-~No[~oT$a *<JmTIFBZ|t[iekyn܅}WFm,-4RMhM]J%TJU؛EoZW.Jۿ*sfO~;5Ӿ/ YYydx.#daK)e:Ƹug{jP!kcgC*dWzFZ1ck-Z\ޤ噿KS_] ,T/_2G*\ěW髼TJJ͊feSKe_EA;i^xHk'Ƚ.Γ^mW?' 
> |q6hoJ%c <O 6av]6]'(\A3th,*Zڌ愖{ pbFV7iVR& 1# @GM,^X51b o37AǛ%3'<^eywM:X&o̞ZS]Ng53+&@ۚ0Oc¤*67$vSrofdֲ}L9p\(O`ʲ`1qV{t3@iZ+Io͸)3'5mI+y$YJg'Jlf?ulupoA={;Ng>ik+_ZG+FX΂dy:$"'1eZkq/y=xϱk |7AD dT F)P8bEQ:,sCpYuל ~4[ugĞKoӽ|Рq(m tr>{y?\YMl|FPS=i>-Sjk3IRY#Щ F93RƆs;oYIFZ{"54q͏4nf>n`MQ7eD%g~Feo}.o CRѰjFH^ n\6zWgOE 5 rABjtרĤF3 ͵E")H{ Z|?@<bN30.,2R45 )lb;c=u5qcƆs;$t"?|am敷To=zv|'GZDSs{'<eH=2zhn $o c `|`K| kϱl3Mm19B<SVF26"E9f3OhiLc""&N6n 66ۑ`d/WtV,s9ƻegT-׾Frgm Wѐh润˒b=[憒R\4KJF49JXqrZ׹q@jT˹qn~=`|JhQT! ƀR26FjEX}Ҡt0DgmX1i43D)59bHETB@Ѵa>kl8;YW}l61y]X@t k椶#i*-(=$A0etιUܛdυ+A~b1gi:/ OgF Ra4#E0ZA-7vRk'-AozOAN85GJw@tH9+(Nwt2s)ahG#bckҦdG4)m̢mOIg[rnj^vK*+FlftTN*X\ l 2jxpS0P<"0868kDe>0vH m014a!c[k3#닓{(?f] ;%rxoֆ{ixZi)ɬ)F"{`&(1E.lS脾Ϋ{ ,*ÀP& -,T8\xm5'X;+_?KTt:?>3 \)RT+rjD''Ũ[PZLѩ_t9omj u)sP!a|RCi2lxrMWyuum +sa>{>[[k[s ~N`0x֓[{b|sOۺ!ۻQdk7uby `X(XIO.}.z&^ $uKHo99Ͽc b<]pJFrJs S6Ki㫁W!\4^zI%"Ͽ c{E&֤lT*R3bo0ݏoϾ{ߥo_g{{:{ t1$Tf0~=ڽkw z~Y|~US○Qm—"Dl c~P0Z7Hjn'-%@n'yRRaREH3*K vddb Ar{RRsך<)($.;:僅*,ab_eh8d/3 \\b9(A8.]9_Ze{X6N%Y7T=EA\䳮D-D5JN-$$ lV*ڃ}߁I9 lH'Ŗ*k^Bmnİ,X5fw;O(ɘuŪk{*Ʌ_n88siJO;p E`z)] Q5|*=s9%k]̯ͥ C/SCHj'1DʩМ8L*aAaT>k(wmlR RYU4.E,HR.![(,39Byٽ,uG3k_+ddvYB?hLbks(qD6$L̃Cd6Ԁ02>AF{X*ң&SҪD#cj]ҫp+։ 5ԤnusVxM?NMJ9Nu.P~P6+eX: P !IYHԅ@JS*ۼTF{0lVIۗf=fP*|ި޼u^7g|Gw4 uhgbjƊ3BdKj`0C#G ^sd Ț6)0k`BҶVu"Z,R$i>E0>4L'k- H](EW+4Ƈ͆y$:1O% "kI-ΐ֠#.IdQh~@Jw mʯ5!X?ޤǓޞ1o:իl1OF}n<]nP,^usk?\H#|g agutߋ {? |8kkl;z`v~,t2&{* ^S9ا&)|Se-yezj_qi\/#%FkWQk7.WL 0ƗQaԧ=R0t0WG<Ӹ~'=Nc(wqi]mtƻݿpd9҇~8kk؟;͡]9tqwj(ueU?r۹$w}zCw>هΡ-,`7uD0T*c}Cǭ;nϿVdӫ64e-j=ۻW=_7ygyy@-^.}|=C<+ÎWtnoBr8VZӿlzK2u7<tnmfq= 6 ;Z;Tۦf=ʻ-m=Vi_^&-]v&2M:e%GFDM&v&fO5&7Q;$-Sut.!P&%bA?Rg gB*H`#I[ 9ZL[Dc\4D&Kc[?]Nq4JֳqnJ+W(xHa>f>4L `Vě fOWA(ڟ t[՚k_R1n6~5m?GBLu2z>:3yŰ2C12)B'3)mpaM2+uqXdFjqRLh :Q9(ol1h H9M>hX T4*GYb{1f90:N}qaPw9ǷyRKTʄ].ji 2QqX0ya M𨲖6[?k 38܁#t= ƃ{5uURdѩ. 
EvB|9𭚷xk,?߲( X )@BI҃JC5*y95_qPJ5dRjIJ ''Kd/x!,%(D(O$Q,))d}L "|"L1"9) !3b sEʢmS v34KHv@60"fak(LZt>c,}:/Diq@GO <<:ϖu,p6g{Ѥ:;FHt_H džD &i۲>{&0"U"@ZmA{ #6DZ13b-VU b3&%D  *ХuzfnkcD~hx9; SkV~+D(h&8@'> %CR~YΛ@ޱfK]( t30v٧$yUpH?{Iit(I٤"gg #8hST<]vЛb5}r/O-TuYm5{=p|J $!%X_K!DfVff9(rͣЎ2y3}iS=Nn#! 79ʕ 2p'V)7TDӅd*O]g1"ZetO!y=QTv>+}/OGVd=󷻝.tyl=TGzVϠ=/wHQs=Pr&(#Rd譖թ<"y]HSqcvf=6bTc'[J>ŕ7wi~jIϹ)^j/{?r:x֣ju yݬ wip_;nσpla#UONJnw՚Wk^}yJgƹU{Yk-$:YW,E?(KL*]/^xN;trc:h5tScur<0?Em;]7|!~xrsZ:{:6t{;bL`wQYrǍ.}߼vښ96vD `9 yd H4I -G钙dBQKtQj|A&i1d/cǼR/^"BfYOggǓE O BrJe:1tA&فԢH0(EBАe{trGZXtu1XQqNAuvȮP,M؜:Y] hr5%}\ <eJ MQ2G΍*y*Z*V=9A39tҚXXI֌͆_3*taq_](]tՈܖ0s..,Һjų駳b5)^zEXbA:[}롞!DfR)r)ze/d!5?@ÏHJzέ'gmU _ \DкG8?{WDV>ԾXBWe}՚~^\N8Kǎ\]U}ߩ:_21EjFJmҲƝ#`&zf4Za 6ĝ8j/$ TΞƎ\H☁ 3 d r#(`QAӲuF%Eֲ\d\mQ(P H+,mUiN@L )14Bs]}QǦ,fp"lEPTUȕ;=nNpcޏzMG}ȄD. RH@k)Bc̀s-_EdR+ˌ€{ 5ؽU7EĹߝ}oF |ݓqtc;]`cN[7ԍ%5h";nɵ:g:\re)F9T}x}6/aoqV`Ga"HU$e=$-aB0@"ѵ rE8؀& IX8O8ƥ<ש܅ȷwpִHFVh0۫-4vSPvHzlـ~ bA1 r=3Ĥ"dȸ8+Ѯ=ĭRi !',u!|RiBHrxEʍ^ywۨΣp  ~1 ՙ%E'*! j%^x˽΃Ԍ!)I)aˎYv,R>MfwYCt8hPoE0f 1$zΈ,3k?N)`JkRNT4&zY#R 5uJh CcJ#6I'g5?HysK^Mx}Fݶ\HpN1NnaPGTY 3y⿜ւa5KhrBG\bvY"in l0-@"/aN(Esfͭ$G![:uhSHD D ܖ9AA"$jҭ} h Y vATD}€|<0ÐK \k @HE/B8"s̓t`/IBbŸEf}bڋqN n =:G\t2U%<9aB*2yЛdljޡJ벲7>i[#)0&L)1yjdQqʱu/P`>M@Bu?3'X”THr6 xyYw~@?g9vA&\]/N]'EgѾEQ-[Uf ̷X2q|Xzh>6;}ՉZx[j ڷ˅G% b!;4a0%89 @(ͅvTȜ.MJa̺/x~bt5 v_nw_mwF+ HqbߑRe?D+0F(  i@XN"D"$Zȸ `%PcL9nUڈ⡂{SP*Y4EhJkeoh=L׎HQsJ?Oɻ5oNRw7Vhq{7nfÈn^=]޳'>RP! @0Xc$09ܒ95ZTI]:Mf~52󤉇g&|yWzKauMU{+ȎϞ>;q8^ҎF\rbJ  !=|dTÓΒkOW n׉''ߗ ybD e(+ɏKa 뻽0m]yy-m|f>Xx_:5C_ܬ>dW e'DO6765~fɯ aiVWyVS  ow',i5.h 4A&79)b79!7)ҮY)twZ>ۼ'jZE4XT_Iv/ 5{ hɟ(~<ڇgg^jgOƳW~KmZ_15 k(ƳT o)HFR']]ΩtK^ڒE ~b)%C\*͂<(r Ffz TQp}tO};B/Q'^m66q9^yKy6".{YsڸHХ1i5qZOq62&x0?{:_AH`"YX:<(b,F˯{RQHy :DžƤreQtG032\:NcL˫鹷D2(:o;cmp0鵌F9MIHK.p֌.Ud 8KװL((N42 R AΰGpII"0)Xq! 
]QQ̓Ϙ]X|MJuR"ڮspV3j8 otbR}R[RC;M ʭqhrVdUh7v詰5i7^P`*ȃVTDRNQ"D*`HX Y VAiBZ%| N9ťڦybdF/cL!նdlm8%c{X5Yel* q' ,\(m6OȸaY&.->=͋_6`02Okm#GJpg06dlv2}d1Y$mmdcy+mV*Xn5zo\blq%ڕW$1lJr+ *ڨ ̊{> &K[)EaDɶc$%&c ݠtaĬc%Y+jӹ_vF<ˢ}c[(+KDK^"iK{*γK& d%$}Ψ %̃Fǘ%xmaJlƘ)=L"㐍dQ* 1+ ["YBRJ*Kjӹ_"v} 8l+UezŻ(^#bLjK!CDK'@ 515rqrq_a5VPo'w 5{" i]Ip}e? '-8)qG̷}]c2 e0Bf2ʃiˆ$Q(e IљiUR+ imZMݛiUf]Ъă8GBt 1y!hOa.ʈ 3t6&Mb|kH o-rLo']Jѧad|Hp\t@l hR+6o"wR{S{wѻh +s_UnYi9qݳ=/55fWA#9WFy %6}okP EAʶGZX}&w3}:g5]lKG.BPJףAY^b.';fFﯫ>ԝ\ܾA3oЬnn]'.$1Qg$jqTGN>1#cF.H'A`>hb.֠Ւ䑓SUh{L;s inˡYu:(Qih;s\c|NlhWUȠ :@.HlreM9o֣yǀ}2ʖfc(5FO8fJO sZ2w 7Z8+suŷ)o^'}RXNkYηzln CzYn{74'(đ[jUR6=#zUy$呒%!8XJ q`FxkJe xI(B^ sChHUV-:'N ׽}mnt{ ҧ{ײKGjͅڣ"Z-@VdEQC880,k,vjӹߝG&|aexf=_ۣE =") >z@SS:$fcK,S앵-Y%#[O=0i q=ROgE3vm96@w&5&VH0 0JCfdQsdQ,YNU b+O/KR&` Hs +ح6  &[+vDeZ,X&:tny`wֺ q}uh*4I trM+7$%#{yZDG9"%*Ý.9!D:1ؐBD]Ϲ8v8Vώs*Rhٜr)9_DF%=@H)7)XWCzta9qJDH3 .Y LDSeJ2dde>6 }l.B_6aE){Ph!hdN@ 'ŨJOfĘ{ ?ZuqR#C&)r@*[ L C P‹ao'v҂yݰgh8hlFZLHH,ш(􂣊˳})/_&*/O2jߏ7ڤڱBH! +١Ҩ$u)O Ir.Be>؞yDhn~oEM/j$-7uHT.>"dE uΎyB#Ǩq{ 0_]kQȶܡ+sX7mKv޻e;,¥kN: QN }%h[=dU?+%KmQ1^6Z>vM:w| (`!)]0gE9*KTڥhJBD@C!iOhO?Ql}aja~4'5]SǛp9ƩrTlOqhJy"Ig]0䌞'0iۮ璉b7_nMٕӓLinV(~ FrG@q0Lcb&zQx't'$OٹoGWDY!p% V3~nm4(ܼY>f>twSs;A2[%Z٧f$ѓ'GgGˇZA*(qho=bpӟ+NHWPe4מivu3MijˋفldUz1Sq0tm%\풃HmEW-wFOcF2rHJt0b0NfX^#4w2Vb|'WcNw=kGQnԵjuQ'ehHXu|֗_)#~țtKʛ{`o_/?>ſ &8ßA\\WlAQEG4FΆ(_t0uǣԿOq0nppvBLro޾x//>{-zg \$,nM/o_65|hj M͇&=ЪS|q]G^3_o ly4v" c(QCE-/wHjZo';-MCOy[\Iݹr9^)G$ kMF* >cK iKx%g˹|If}Fo*'emǂIEf1P +zcZ#+ oB"Af}~jfJMIb> ~rҴ%Y*Ji9pBڡGӎ.a2iڗ~4>fr6>h6k8 ͛l⸙ S:kxvw@ubL%ZH˺a hK3MƅYE v_nt[֙29#YO)F Np1~݄{2ͦk_; B7{~r~D7 A7%{6yooq v! bߣl)uq2 $Wnrʒ^=^Y[t>{_"w6$fwש d?ŶDC KhǟZUt&ths{Y;%,knݿ]{e;OR{r7?k^yRWqo!w;9[<]SWyH'h F@-~Z:$۪= Ěe㳬d?wLa,̀uK*Ħ ϨRz[[r;[cEK.7;Y)FGR *5z$`"WڥlE&x.7 +7‭y {a>NTVT2㺼XJ|߲,Xȇ+dR9J-ch&뤌 t2\eS2fҥV:F [&{J#rX voGzO_(9wvc/ 7 Z1KTk) 6=g` i K= t#R. 
7Pk٨/5`k U cJ#68R&FZ{1[,} *۰/1r^H=s8|;TWmδ!c\˂VlUr|rtđUUEG4Ϥ4=0.Yo/D^rD) ,)7)SuהuY[HtHD D ya#2 p3K$bb r:9e*ĐCHZ F!JqTQ&Q#Ȣ`;X+\$6D hlocKcD0HuH;^N3G{XX g6+<!18Y߼ [zfJɴ$+u}h%VT%q5e%[Ԇ !a,,;q ˠR'G;}`8#LVEy! rH# be}0z)#`0+RDƈg̱5>~#^ѻn0p:?mP%6>;åvYC[m#J)1yj$X3#X9c"lS;5iꜼǖżQ.dz;ä{:ҥ.^Yڮdr&(ux܀Ork~vk!Uzb6Ȱ^ }̓M/)LA ~Usм].8(Q0 T C΀(9FmDm.ԥdTԶvY)LJun_>.e~V_.JP* VLXFJDT$hS$aZp/!rƭ`h# Lp1}3m*Vd88}jA7aĝ0]$Qk[}.zw}~pGo~,:pNtǣ|o;`O:EBw:ť~A )ŜN~a,p:t^ay;u#<vu jmw `E;!9|V6|?ŏI̦&򦊖J/fQ7C0ӏ.٬gϊO*N)ITX'nS4'~Shϖ/A&o/~_%~|J<|b$BE%d?l%< nx^%gY"в2<YSoOQ"'CF.4)N;q:pkw&hM/ç7(NzˆnY˃?Y;X7-n%0H NX]QUOs^E<8POMOf5{ѥjiR h:ԠvryoK}?~|$&*lL?&OP1iW{3o&Ghx0 jWHm IoP.}rItӋ_]ʒLZNjkړ@=r$ko󲇕_S5ߧ4J;a~lriNu ]^> S k Lg+45!pB|Co+ӕ+z:TTH0l]B,D!x>BwubF.eQ|4iIy;5Wl%p_ BvV6ˤp1?a-bIZߤOl O+ta74g<^wrfo3 7 iY.ɏW-RXUH?w i:r<-Ӯj\=tqjup  ߖ ګe4A{LkRI],2a{㞥]|14_ '"pnC v!CJh ^-ba< "^FKt]ɑo/ľsGߥ#$(RiDEs^0a@ )?oF%t񢜄i/E5m7|hoyY#}_p@q˪o32BVj(DpB7uyqM<ʤ|o/{kp" L]2 SǥEȵ5@#)"$VX0ژt\Y};ĎӘay== x$ )[X2\Lzz-#dNFSa0m=Oޟ?28,H%5k8R%V,/IJtIuOX,`DgbƔ8]o#7WNkˇ 9F@HkYr$?9~/r4X3#ufǯ*0B*imӤ0l]Vw+j:wݻhǷm]ZMNvJ稈%̃1C6Iz[)clЉ􏻬 'D.80D[A472ɒ`J*"DպM>.N&%"TEq$^EnXT:ؤXt .A*e[pRma5Ue|oIS:nY&f*˫p2C]ǟ~m'wIeLе8/oGW%5 ʸa>#x[+Y=~A3ΈBHR+mb[D֠:sau,`,7,GjW,7hZԊb>1 i4cL8&,ִ:dl:x%:W;@=ܜ|PB{|,"ѱ))cΙR8l,uSW* S F;{5E{~EYku1᯶V3|]ź{uviՍYރ):Q ZΡkeZ!,Jo#Uh]ԡbomؗRe;d 8rcEv;8.t @[XlcUtRYn/J0⠤{R%I(WF]R /UցEduI˘֡lRyhڨmMպ{vr[r]YJhJYK/@A  5bfR)ɥ ,mre2NDbw4Z+ʥyr4fȊsicy&( sX6~CX{AqZ9Aٴ&ߵY(!AZk'5qT}s ; 47'հqfp|\)XGy<ξM^>ˇg Y5OL)FL&WFk4JeApHs D qsȌ+D4x—x\> "jʇ Fe$ KL2"7_K~x_5ӱpt3\9JߗZ)q[V(SlcTg]Qڬ2 H,rNzUCҽ>M/+Yη׉S^DžHSQY{ Sעz _GG_SydZ Z R6k̙3=ӧ>KO_+}*ۡl'ҧ&%M:Bhf.qǴFmB̭dɕ<Vs&cJP]ʕ6|iT,z7󈭿6Ē$sn}~}~a*+fL \FZ#`n^oyS}%PGұ-mW{hzflרYdig~&hcJ \Wץ_C$hMN>1D BE.V:a } Ȩ1-֐Hy%hNY # J&vJWpU)A@r|1 FBE-H3 h*9^Oe) >y {{Z#9] sJ{JJʸm(T'}@/]T L`BM  ٺtʾ=_5>zmh lj+728yВg5X";mquHBJt>!Js@@w]|G+ݩҾ0> xH))+QL$pKCÙTfq?kQ3tyʙ7ZvQ[{ͮk>5{GcSM\Mhڋ4π8sմ].^O`z58v֟ Mtk#ݥɅZдDxHRt.ÛA)\^,,< ~9XRl|Jx);lҌ2(5tߵ27@fZQrDØ 2vhLq8cz|5h:o|t bqrqГ_2|Faڿ_h ^rGMG /[ڢբpu:)=#ޤ%p+lXe*dvrI~vvnPt7cVw6uq O;w$fRzJ{#:\:" 
hg閸dsE^?̿_%sOMPg={R ]4ϱ{Fo)&RloY O?7c2\]vRSĽ(ɭOneHXY3(9,zyM6ag2pO3}JF^OΡJ#:1hlk~@zS砲@[a Y3+Eh8 Ȫyy噇x8-zMp-" ɜNbDu 1Kd[11kz-2pG/ B4,Zr %<]j-rMqdRI@cot(8i,fSB#rWФE7'tC*|\bb:&敱!r}ܤ 96!2KF=Iv"/`|pjV/7֫]-&^;g00d4_:$"'Gw})*& *+؎WiJ\,|a%lkQr2R: 3`k`zۯ2b d⡵Lm·3aSr (;m#>jyEt>c!$ Ԃ]\#2W.r \ʐ8L 9J`dB+/|f$RdڡZպX&G$Bkx8-8&}av|:iO/>roTI Lک .՝ĵ$% 2lňCF1r+o坱{!tX<.&-oz0vO`W~7dŭVY*  wRzPDopvcهQ_wF RȤZQ>$bdE*uL  .q&%YXf.% @q sRР3ˌdRGUA j4֝ q$!sb!$E̪qM*0ZW_UIWGU>E&=cO8< 8=j~ՑLB&U6iNd|0E?<4<(ѫ1&ly\pmZU s٤^,U}GLMQ3{-lgwF La>X.jy 4ǎQNjz} '7n53WQgwZ(tC}GIn?{Ƒ@8 #!sl.pOdHʱv}Ň8=Nf LO?Խ(:9Z]i(*P?Qf'u4+(^^?HzOBOUj \P}uZ@T(Bv꾷Fu{i4Ʀ8ۛ\oCY-[vg+;Tfs}<\&ȭ-ձ%o%is'vm6-IۚUZӝHu1Z})s(Cbt^^=s4QiSf|8۩YT%s(Wh^6ިyQxwIJ üCK%!5i~y1.{!OwT\Z[]*ͥQo9VoY.5wj4B.G ,l9ZҖ/gZl>AyȽ.8#nj`vw8玪Ilr+ϝ*ewXG1c }bO1b>.b Z#GB  ,rEi.iཚt4D{S([[B7zf$ 3) ^سɭt\'pb~9~_oWV}+i[ݷ:U8{TSAM]U`tj%樯>>q)8NHq3in<18ǥshNLW'tLiaD c#Pdc;c0d՚R1 &Яۮ B;0)o:WmJ Lc,-gT8Љu}';5$COŧ ֗A pezDR;z(őyqRU`bM)1*(G;+m еJjԑs{|= #^ +Ţ!DQ%cއ(0Ā c#%Uyƀw4("09`}!`e& FT8G"W BJ(2gu#|'fPۀ 7`\,,k椶#z Nl 2&7ιUܛ#  +AAN?0AИ3 Lmp1Ra4,F:(aܦܫ՟>O`rg4)J9+(V,X ;A2Rrhbʉa9Q'OmԸ uXB눭OQón2j=qߖQ}oL(ɇj%"̳@lba0El#L)3D w vTP& -,𩰱%jS6zXb*-'!CT`P3i ֊OEΏKIr*œy ~-T?o/^\?U^.+g:C%[$H4_8\|*iKKU䣋̨!03Z%7t io;eFa4.dT'V5g6)*A[jX%iuFC#aaj4:b?EPtGάj'; N@ܨ^&t)>嘣X2XyĘǔ'][={%68z&G~ե!J%yJQv!JC{)&ĐAcv<\Q]} O>;O#gbMUxuǫ;zJ$|A$e^9\2\^GZA ?)jE+%,ڧ `:x3}v99\Ј$rm˃. 
Yi@_Gؗ^8J,@٠`σiӴMlhaB(kBBis45A`]yeAxi)*c:x70+ L}R?*~ ~xE]%,J6,%eP _PE#YP`*I'e2|')T(ɾW%-@3K64{]}_/L_ܼθ]]m'֐Vג)E 06jV/~-t1X>NqIui`m)o\`Fnv֩e!5+L ;oo$邠)yLqԐm7>ؓ/K 49*VCgo8C)yi=RǬܸOugP>X Gb3Tч9%!(KEK+0uI8DDՈn»#WOIִ&L~$WĽi3̆R R_*TtYao T [kIEx6z %Q>^] F▱M\4‹Ur?6A-.o #,Wtc.PY6m0oR歒/>]6MrS]nлQ0f5zS1wyQ$.o$I_MG: h|c\Jr,Rœfo̬L*VwY=2W dn"jT= vuϗyL5:na"+lR+R-rm|Sr#4 ~Cf 5) (Y2(GQcͤM'!GFQY10 f1gra}:F E2"YF0/Eµb6.pl\hdVCםjx7RvGsU~ˎ*Spu@Z&M"ni4qV S\"Eoaۉmv$ln[?|ߕh垄h&9vM ;⾢u;^&zy7;b;%ߚŮQ:l@hrB_^]?)#wK7F\ѧD!}8v?첳[|H]]drOMprg*eMP,7~'q4Я^y~Z0^'DU1_O#~kܨIJ)e;VmZ4~ہz3rw Fw' "P.Cz`nKzDQ ӹ@3溆|;p"b{cv:7TtY`c)h>4H x3lP Yb ''eT;x_8p[+z~7~5ú=_[>;,Aބ .HoNWo}"yY'Ar{tn)xޖ)l>xV_O$|RwA!~pJ<=rlO^L =׊/!ٳ}˰ ym%b(E#X'\kSZJ"(S#oS$6bcN( cAKS(Qd 9'k[R%z nbi1g;a@[a C}V`BU)Y;  8ZFL&ͭHIb[{ UTkn tD|5 K1r%TH2O"e̩M4PxEƇX/803Lc$D`@3蘑;6:1s((5 HDc3l ^weK4Ֆ .@l(L2BM:1[a@ .C).DWqj|JF4L BLP a- taf{ K3-'Cꐑ8L2EBXPnAJjkP2Do ."E:&MBw](ƙ% 8=6T+!5 taՁ9Z)<uAc 1ʌ ; 8,XMJ:@ jZkmn"ɲšO3;]|&c=̇]:|X6 }OX2*\F*ny9Yy2^i04.I?368ЇJLa6x!5TiOa 1 y 0hrƫ@   6TyE0EqEx&@uZ)4Vوpv͢?` N}2 X sHSRkg 噴UAŒ5Pm6G[W "QTcmʤ7)3fе h%Vџ)˲ D@Ԟj5OYjsC(Z$D@ri+w%A`KFd* RZg,؊~BނNdȶeFB EsTe.قv0hBAk1W*4 9bv(@i2UEr 3`cȳBmTzr%Ōq2M!kvH |Hc%@ʂA E"UqSU5J$Xu޹TEoQX$<@еƶebFA c9_.J)V)T66% j~ VQ'J[e@SѝQ3DtaXUb#JQ@+ho8U {ve"J̭M]ZݒAQ'4 %ŐTY)Ǩ-C^b-jI`UWuDMXd3]a0;ͬ]bD\|1ʣ8I`IbYjÉs!u@0- bg3~M}WڊBa:2YWҥ`ruA"X L-ɠ<ΠXmAԠƂBGą1 P$bDM W$2+BY.*|Oyq `}Q@VǃV( 2`G[}ƂA,TG''GbiV6SRe%) CdDzc0p=¤{g9TX] ("F_b =ND@6D׷K x1( 7Db6y.e}tP9B(51 hcBAhU;B]XÎa: L:=3N6Guƪڜ׃bFP4<kFV-5* ΃;(@K"mR82Ggm]ƢΠI`&Ybm4GRPHfpX0aE^Y` GE*C DYl9i=6C:hϢ;K4YF%Gj3Ko^5 RYUۣ ^`VֵZz7(-Ia. 
f|t*2 c%V`&7Ҙgvc&]y7ݦzyCŤBc&AY"k.nj!f=0S7)w`)jYxhZ\@Gɪs7zSzoh3wm xS9njj>-$A%\ Ec˄Jԃ (0 +IO p'r5c5X߰UaݲIY!|UU.ϋ| %TgG͎fc><y'-OBU0P1)KGH5(DV9x%Pʑ[T'%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ RUN1uLJ (ډXh@֩W :T9)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@߰Y1)`ǣrmh@VVJMJoP  H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@f@l#R5H dcY++Ɛ[T OJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%з>ފ_OF/R~y}Y^\?[P ;'u'i|sqzC~LCe soOϚ}Iqq)7ɏe~I8(ڥGvR4@kuIo*NCuWguv9ܮ38dєs8Mr(=,0ʼ)H jTM(ݫƼڢa4^id|:H-EN&oN//ȘT3܈jwBy"v&)pbl;Io`-<ۦ tkkȚPY!=yv~$XxYעڲpU;@7̺#[#ٝv KO,RCf\ۅ20vԎ-lY$-:BfUi>[$޻~۲ݺYb.g’t.P&ESZJ5Y97}0|Дvw>uEKg_o/>ҏN_< /=7?WGڍk=Q-â}|޶RVE6fwhUTXr6tbB_䇷yx+oCoӺr>pba7/"y}-WSY;O3K3=AН^Mfoz]sأg3m+远^N^o76u)uٓ?m,$ph;x瞎#XAerA/& \. $}3 vzCnYt{ ibqZq _?LeU(Vy_:XRzVzѲU`D}~@cn-MFmk).k6}{p1)U6e#۞\uOu;LzCw>znKQ%l{uT-_yL-#C)抯b{_cjя+26! ^liOwvاc3/jj7­e/Ay7޽%)!Ӣ3dCR8SB\{bvqΘ/x|=cp6+}7[Ϣ'S!nZ#[Ϗ+,rh7@A׽q,Z)kǒ>pru+`ڀݳam?޽qJwQVs'>ciʹ dwg *m̥חf0{⚯u|(a٭9mO~xO0.Fh풤퐧r.]&L{殛5 =4&㐈}Vxm[GӢ'Wݳd{>42$䆃Qe ;%5L˶SٝvIZW3my+Mis\%1.ɠ)cYe68A&, 5ۇZxi|s۶鬏[?^`WUܥmQ߿?t7kݞ;w>DwRŢr}=D)\BK%K <],GMߍf>%`P3{w6g*\LNS믝f/N^twYO⽯45grP_P~ڪؓ??}::ǣpy'Y"zкl1d:{^ѯ4}o a.nUg%+ɓ"77mf2M/%!NdOu2z8ʥ0.۴fWﮓt?rsok3~=~w2|1g!=E0wJ.q[qw; rҩ?cb|i +/,zJ.m7g<{ᘗHلK>]~~y2I-ʂ_OV3 jUji2txAqԂ!)90#8Kk{]ᩃ\?`-3W޵5md&; UzSS5I:L%8S߷"xH∔9B1Pޕps gU𲖾h%y2e$%cyQr_& ^?+?͑c h"0?D2L$.3,E$8LI@a(L%B''\0@AaN y"fh>п 4tt禬3(|_n𩘇[&Ke-f]aEKRziުk{h|bj儉~ǿn)mP/s5;c__纓e}C υE_ڬAja[b'r'`D{X]ᶺIL)V{8&$.1YLdM=:Saqhm@dњQ:",9Q㰊25Pa1?. 
ĺ~UV=Krw̗ ײ7ڈjzN>ݘLYgfb !04%ZaQHPǙ'MPQ4O$R(0fKB!I%0Lk9,mց s{^`mt4?4%(8;cA[:ysq1ByN8AG\;[n݉9ɩ~JgL΂=g;q]3+L;f9gw׶d-6D8[s;*X,[ EL@R;;)vү#rwfە9+uj;A,G8]q'8+U(vНpb2 g$5#g%fX!8lPX>aHWL1'\/ @3ֿe%kxwіr^ rXv0tr9s;T:q-%b E+GFLɳ?\*LFiDǜ'T\i*e4THDJ22 q(q,D}Q"/s<'aN?沎&gKVߡ)M`FrY^|]>6`\C1R@QGaq¨F񏟇gׯ\+hYBDx.bZ70BHn+,# `#w{bYv[nT}Y']C=ɼQR܇:,##EUY= .ٙd;G`J.!EX_<wXR%f;žNOee$gl,e ɫpwU7\u7QDAr ;81^grmD'Ί6eHeE(b+0%Hl;'@XW&86+kZ}JύzAč=;♸qOJojY|JST_||a$Z,Ut,ܪw6ͳm`5 _.|xM<ߖ``%1`EE9YAn$C1OuBr+g"qZ *ёMf CK.a3ۗ6on(Y$Dr)q f0J ZicƑF"ɔNF8^Kp`5~cɘ2[kX`T ϊ 1. JjP PcJ#SbJ%5")I5!3Ng荷No.\Dvɩ)J9xlzwoz¦gԲЛ0鎉#ٕ*nHeGŬ[N[N\ɺ|qb2Jܨvb'݉qs+gYOv):NN"K+#q:?*018+Qt朘 [1j~ы[O(ڕ&e9/+fɲ>vNΡl&ih7.s޷銟,H0r&}:[V]y`5.g tRw/t(Iz4JWv;KN`$o_Ӧw8zcA&˂1W ՕzI8aOXHIILB8IYwWύqY_@t׷by<]!툫2Cp$GYGstqP}_v{#N씪{k6" 哖jZ楱hl,k:#\Rkvwt؉O'O004LRX2ƚ! FD08<4` `Čz{âY4FulvP =:%1OՕ}x'vޡdωɠ;Nz.GRSȎjΈ4k垎6]/wx=X %so#?q8(Nax;h~Tn%w'?l PE܍U\%@y "4`RJQb^<ЧfIEr8MQjp;ᲄc?XB#Qpy1ڠχ8wn-2oML9L 7eC#Q+7}!LX /)z+59`?dXǀNVN`$HG'\Rp͒ttмyn(f/A:NhadL>EV=b?q/y wg颊*!S_:E`9 kwB`x AAY-raO17tzČ ڦײ?--cY겈wea]*K#(Gaߋy*h$.NrxUnV6y{vrkeegqvbD|ۃxi97,g~O\`r␐fí~#aC2ߙyo,_-jHܚ_CDK?!_8/ԟcÂ8/V7Ig\_ongЖS)Q )o&RBFBTL!dLR=D+(\Ç̧9Y`>3_2%()K~=ݹr!Yݑ o|%zpۃi$x-Ɛ;K-ROȟl=ƒI)@w|a(4N@86s0!|6D+@C)ЏU~e-dkq(ae6b T$@㤢O:^ OA8!'kܵ̏$y41z0KF۳o~\R~i 'n D<ͦp.v<nNqܬC*=.$zjt5"#{Q=~g/Za.*:S'&(=/zwr)cV* Ԝf>ǨP?h<[ Ϊ(>ތa<7^)g)OCww L&gRl TѨ$1x]~fV>vRzPsx<`~1BAdb\Dž~z}2Ԉsgn0RE" +Zz[#,Q2+7߬W# O6'ŌDw^(JNN0FqCdhw0Y.i]n%[ocJ.@ͳO pCHМղO|\eAoB⦅|"% czHI`Mk?jdWDɦKj#v00QG9 0N~(|ۥzlg+US\jMn@5,;72^SkY|jCtK;&8gJ\rXzd}>< $In3Jm\vXEQ& AcLBΦO(V;BRTB^v/o,"Ud08 +Qyvs9K@vE+Q7wxɄ*U5_핃RByGt}dp3j}F*cJ1P"?KYRE7 }.̸nFw|_l4=/Ix{NAd, etS-pXz菹]yh&m^=)UQJP A$`% P4CL*QzQ/+d j:2c o5% 5옳طǡ sfEYL!ӃurI|n41 EnA"0L! 4<~Uu{wjOȎ +> 6!9suC#nXuoq]U\kN#=r $ X2H*s-Qir_plްOM4+tAX ze=K##! 
4fpЁ $Tbw z>p%ټl/N׋bް缿v ._|ijR65rkŝ)H7V!Fj(`,{0(2"T%ج;MJ]2.d{#'VFC\#,ASq%&D)byMi姿9]Pq:8?V|}ߧNM* %ۨxJO4krҳٯ @x4{_Ep.܄}S ldmX~aKpwctC ܵچ񶶤hiZ[*@˷đ5 뚑?kG1U ȷK6_x<< 0n!R=` sC_əo'$cZq $.Ȯ#`yRSzdU[G fR4xt7S:ex> "!9A:2_ހ &NCovT_y!kWw^ 8YC3ITyKVRT2:jχ^ƌCjZo3/pA~U;Nj;CW,^jgXɠRa냀K VJvqcOBT- q%7CKEk52"D*<:y#1nfzPyRnzퟪR&OI!Q KEL!54+}`Zḟg]m ͦ^]̿[rR1S:cEf}ė!5O[ we$HZG߀[8mk\Jh9km$7گ"/\2$:=a۲ݖjMmmky"Ud񫪏U\hd go0 3{qO2|'A?ؤe |\G/#̟U"γ clQҚ$sXSf%ͮ\MY[~ |s#h:&ݴTFﲢ$$zWqoJvc bb3K{RPu>N'n'zfWУI v\Y/̫R"4ު4r&9 I+1h) jv6ger$ȻF>GR{ XeRp5-rs4 $HkX&)* 0tVx rRlT`}|QVI9`w\NSv p&#R"NݔME_j7zP\ǹEsvXfww.YW;RBM^;J8!iPpfCf/9>\]  &*MT@՘m~Ӊ/O9Y p9}8-#Yy kD }N"67Uߒ_O}D;IՏoӵ,}Ky󠴗Bi'GVZ HqWk.N BjߒTE72fF%,۝.l 7&m1ke ֕, }0Ժi+Z!=I:z?n)kPL=i!8 %@Zו,}g?뛗&U$1Lx4 _./auެ; vCV)xp_Gtop9ltlEŁ]#L0?$PF)'/7H-ጀM#) Ɯ eQ'$u;0) cш 3f& $ogPa,eqE{vǜ2k!NSQe?oYc}8)Cm:}|:5V3 Jd\ T 85PRYG{ iCz" n 0g>FPhhꁇG"F{n&M9q))bU-)ƪSo1ʐ;y mamj/+1F?u߁ʽGWiw]ڈxZN;0[U(愣{܉4gi_7?(b@i+ʷ֗vHzKū4*SxI&yԺqK(ߤ:OMCrGh!3g^=Ͼ6>Vu+UAWZLLSGXQd#~q|4v2& c̩&>,@c Q1_3sr2e7lO\&BzYnOA:u{F%b7s ZM[_0q&7ӡ Z̋_I)mݥ9#{= g:l]&`QD6K&b*(#xO[kCp/|X.z7$Ac(YJf(=-ɂpN)Þ6S"WlAa➬q54Χ/*iw>c8/t:/5-gQl#&1_3M=Q dR:9|FdfQkn0j*zv}WI*k9zr#Z~FB]x qʹcT)oX؇zT;p{̺?>-Bi;5Q8YҼ;Yθ9}b[8vvT[ϛqدoGg+Uwq$5wBu^&5N"MA+=TO6O]JY 2@,/&A& OtďXX;Us}{TĬޯ.eER&)g{;?#;ny>>=}YN [5Z=ZC^NCm 3/of i: >?+­x&^K}&+L0I_hE1/}ثsVXO/9zRϒ^Jk=Dz-!ay)m$|doh#׫$|A_N5d5Ik\_%0D^?s 8z]>ԼuŽ "+B$5UԾ(yu?=H#.ѥF1$dJ~ Rxn+lf.kD` F''Q2UŢʞaS]s*yE-.4wI]cȞ9OZT*uBg$ <;UB kSlϙ>[GS IyZ[,;S:RSxPR †;[+qW u "D$)gS)[:RQ,! 8ȢBXkKB0{eͣ^*'BJW$Pg>[iǦVE)Qx)N QDW;G] ܆$BF#Ϋ}3Dzw66+xJ0鬄kчa_̭i-fZ#q$)FU~^Q]0y}FA@=~2_ eM0ct`<Dũ'#DXf7+IYGQEbpAJiעӸ:[ZPՃڦUFVTSMrKҷr`1G8ky0d(6Wt4Ct0Z&j+ũp U*iZ='W}^;Q d|Nȕs]m  )U%"lvRjwv5  7YR̞i͐F"fL`\2]G;Ij|%n7FS" (V] amCQYx!`.X([%IO*q{E]]G[xHg:ck =̈^3GYV\4]NC;(jc|w=W:._*m9(E4H(( >s^,ճRO<漣4+u@%&|La]U-!KpWr[o4B +zr+ۤ+TJUjHYS(yGoXbz@IR;x1E!l{\sz$otxqwEQw=5QsOcu -ʡJgM]V[ m\؎}#& Ca`kೢ-R `s:N+PP!!q IZr(TQb#Ψ &T sIa\yoFJTw400Z831?v-oΓ%_i{* 6oԳyBQbTgnH<;\5`m_Br pu! 
;0* {_SSK !IO'IR'طS ֨2R啗؃nP 7*"/;gKŠ:>bu{O)Zb (¿:[K, fée(gme hDh4fv6st?IRw.Zrd|]@0H Im=:kzL$Bj7kg$MN'UQD Lά DaJ4ȸj쁜!][ہEXn1n2HN[.W*my+jN%F7- [R})K AC?h,?I0BiETV8QbSlYr@кV,~#fzˡ6@] k!o nDpl(X):q SDFCc#bMc˸MkecL"=v-Q洙eAv|ܜgUm{@)H}D87휁2]'!ʼnB*I9m ɲ91Zkn+\ Er֖ (u2i-#O[=2+ qrRz, G kb&54=b`4U;(aW>Ģ(J$(Њ LX0H""B‚5u1)|,9&'vш]kAUERstC@96;δaMG@'"a=>5`r~OkiΒdpXƁFI,d49nZӨ(U1qE<=?1` <j/Q{IuتL>Nɰ޺+@nFrEW} !\@3w BEL4F+}p }p|7X"q#xsͰ$z·p8 "Z<} U@x(J*Nah'e!&F`_)a w5)Bw{7z3^32?mp54,aVغU݊j}1|,m#/X[13@JlΟ6OYIRsf7#GXdCKuPm۔أFsZ-+u` ti8$ʤ6/0`iDe}tf5(J@+RQ$"a'IÆl&?\F/<_čeBfhlpy1AAXk aruD0(DU~TD:F Y1#}F ѥyJ~B{a,k!G]Jk[J)@_W00aW6@BѩٔR}0qT'J(ZEoX/qeƑsOVDr=,JY3#=US:#]] ]'@u4&=Z| FM w@Ca+O?ՒgPCD}tKFeT\64WV쌇FZpQ0=X1_뺐~fnA@)bXR+DC˥ח:P\pJs'EC/_3%_~ԣ*S/bw]?ű5t_L鼝R5򌷒Gl6FTc㙋Rtdg&'tƹG ٣']A<2D**xȈ*qp; xd,HDqyh)oc$9KGqI<1+e/މV4sJs:@&/MX%9E%+,α+hc͑7IUMD% KWѭ=ey>_y`OJ*\؆E'!c/cAqaYb՞^Gӎ`#+cJ,H" IePeQ!j*,ȏq0?Gۻg)TL$fࢥ9ld3THICc8yHK&4LE@4aL? ތmΈ!c(O>ފa˖,_􋅜AWC.IYK)9rФJO:ˎO޾s0~. *T.FXH̥ƀ $rce+]Ll.W`_t1sּWMpse6qIj& NJҸhzl@d:L$ [9XC fm%Ӑ^c8{=MvБT@'ƁSi^t(6[u.0eKZqD$ ~{9 Ym-=ؐm)I|E:ttډO$-9Q{Qu@f@ _%t\(E \{-iU+eUU'Ji[Jey_(5,$OX0H""l,2Ш .crqa X=&hIe?~XdAQҢe8Z n2dH{XE5S E8D) Sl~Zzʶw qo4{JF%Ahh==G 6-o+|n1l4 öcD_ے{nSVи.̗Hvi'1Oma2k!M m%41_8TI؜[`݃O8Ob1/~@H\Я݌;\{3V* wdG8CP4=FQڳ?ebkҽ*G)_Y\Ypv[82 kpf@ʽ A|o/@[SyN1^VYktLC nNRTwJ'x[U pRz|r ¥ɖ~U]óU c& J  c"az]8D;08z2;O$j݇*2͘-.YD&ѾfTQ8^q(I G4Ih\*c cYIUh,o˕qE3SbHh DUz{~,J.wm%hkx_O 54̒}Xt@kht ~$~ʐ'}E `oJS1?u poHPQ}j%kŌ|GaFj0GPi _r֛ !/BAQ Z%]D~6q#J{.7ʖ01d+ m+IӮ^6Y[L65(*:<-s<"„^V [`0O̍BͳqA!k#0Z\Cn*XO2ze=ݏq-+?/?C¡gHS<}.pEFYkTY4;ҵ?hf;P5tbӂ =, ąy0P6ؽ=~yoPɑQ;KKU#jU}-D՝Nd~LuoY1l#\; }G-SHW7SUdFMF~_oc`Ƴ)Oh_lAP6(Xlvc&ȬA dc}=>ʹgR\&!~[Hy8,+dhEbg˺7DOM$[r`7nDȀyܵ&HB$3u65%_'Gw:y[IR 0iMbBg_&6G3u ~笇ݺ QdO?,1GȋsH6k8EذG5t1L<dft> ‰Y XqC~˷<-=<_In5o.7|/<,f[)~yyXߍa Gwt1}/Z}㔶EgKC8&ҵ!|VtGqJ:q޾8weUHd|h^VVH1 `co7m羚=%/t`VJchr>e糉&iQe2[Z,3͏[+YMx˥ 5LŕχJa^)qDT:<7VH62 /ƿk՘8ǔ 1y]_Rs??⟾~ϧoc5fhIlK[k󉄍z̪W~%);Ri@q^: :uDpN&l,՟,QOG-M|"x)ZAC WDo:D^Rѕ(E3QNC@7b?_9ig,G+Tro =Bt$B,q*DDv !vx[kil.KIQJ$5W>Q^㌓LIJ  
1!C{0ɰjh9cB+kFv>OH"e=Iq0lݿ}kqRPwy^ DiN5VlDye >|w8ӱw(ހ7X?{q!:X"oq*mU0XɆdkev}rтejVa޵c"eg0˘K~M=EoxI${(NRY,WQNC:s9"yB 6߇F|grbDUؙOZW֚f+ m*)U”VP۶fjۚ5SɁԬ9UP\Rk=E937`[3-7xGάݩ?@7Ǥvgnyl(2bx2{56y9'pߗtȯSg/9Z񟓙uZ`ylTt\,o0y$u9H^xLdaš fό#X-8(mQR K0\A\j%(-n Qp,6-lDQaUi0+I" q?k nCAѐځ+pti[|Ђ+g2!AU*DkŬ^R6&dϧZZ47u5}u :P0||bh}%`q{"Z5`_u?ve,z9~m^waZC ѢZ+۞ÇN{opOX>܁W !Gt`*4`Q7Ѵ XGamV^:\LG#LneK3Ba~()`U$ŧèZTYQJ WhROPY[t PHYa?;U>є|QV kWU*P6/CYtʣż~dYuΚ:08c_ƫp10t&NoWZǰ_`nIpT R?q(2^ )4\l :64r5?[\Sk=x V6ǁk+mK\ng w`KYe\N5?}. ô@BZ #{kn1D~ZWHY`Ggj-qJ![/oa*2S~0±Q`# 3╠Ȩ+zkBv[}eK_v`ZFw]wKs:9 Xgߦ,:|. ΐΗLh%s5`2E [ =|7E;0'_ me参ҸĦy~QM'vTl 7^YDo7d Yۗ@[ @iGiB I}F:qG{?$Lg_Ò x &L^bu(4!N*@XKiK=` bʨoz '\rr5XDi1P()}o'b'"R* px<"s(xU߹Z(6`[ɧ9M/ާRj`I01vUajmV0:F##]GTAD ]/_.t.Ἅ" 9d-UHQS"ȁ݃ ^ÿY3 ieIdM՚i A[u`W&K:GnZ:2-K te(r`-jEèRmƀMw9[^ֽk'DP %q=-fXvLF0G7)K548uFĵ+?̠rڟ>Mߧ_ gc?$`s_> ~jq<#!$Uһ>0<߰npF8&z}b#Txz:^x;ijF {g383HI*)VdӅh];ӫM'sc8,o4U˺~@{L.A3@R<}2N/2M`xP#.A:e#K#j7W_&XR#koF|0Yj vXk|G󱍴4'mAp[%F [r`"_[if[0^βߢŃiJNo a>]R4d(/iЀ9aOZ<,-̣a)nq"'(eYg31#/ o9o4hf;pLb2AKHs]Ή bEPE9KKtEwk7 2{J)|zrGޢ uG1``͌ ]"ͯTKw12~YxJHb4t#Ws_ xWY[,-U* Wn)rS{aX`yYDA=OKy2M qr4 Y&a)Ltu`ieP΁O.//AMΚ"Tޯ&GI; cY,ۄT35`|}0&8ǩ;J^ 5VWןll_?_ݫ7Y }^?~oU ~cp%R "XVHb*ГgF'EPĒW<}5k gD϶0_KvCav<ޞߜE9K0NX1eXODSf[;XGSNbRJ[ٍis~+lۂ = l/ңbd1Jjc-b%%wa4˟IBJuY~^:{}yc/Z,~ piCYit3Z$is4A W-1^jaILS(ί)(Zk1 UW}\O*u\iLy?뉖~T ]PYNo$A\Hp^C: &Pu-K9ѤBZ 9f~`W95`9hk]rs.G_u|bNWFK|NH\j|* v`[ ;W-JH ƣIc+Ie1ޣمKh_.Xo6T*udɸUVl4"oC+ ňIbvO7,Lcܡϣmlb]uGѺC7 Zzm JY:Ꙇ)9`hJ[.%OwPb{X#X#9kk-aon:O'rϽ6d4O㧷ez\͆q5 K 26Gd^^jTRB9J靨`qY#zK͞o)cQ{y9 {n|n/'jDv9iw0-?!?@DLq̩Bc&Tpb$SiD/,9P9@ƛGmo.3JPV#`wqʫ@֙KrbԢE%Ҁ "hvofWZM%y2hzpԂ{|u6xӳҀ2#%,DeٵK.pod;52]AcAn1\]KzBi9H=*Uhѳ0~K|r-&!-]x*$ݏV }j 49k:Sk?^ 2q>LxL p4J)2bÈa  -׆  4~fP+]]HK4[#=05ͳoq5t#:\6'˔u4VGÇv3$geRTʑtqV@tqxEf Jn ?&+P9Twӛ7'ʙ`Q֗E ,:U}"]dT,tRjO q_BAR6©^?k(+Dة8h,*ʴqIszS#S%PĕG~z)Zk^/gtpTӋ)ew:ISN;"rX yBɢ@\h/ 017b`9JrS<sTAh>\5)z*H+X[E۾ZV_:X#v"|P#>%l&R,p)9Gz{z7?w]?Ǵ(RԱ=!0VbcܐKSԱa%/0#A_vgmJ? `]`,6_+,${ߧzQeScn[]}S }HJӏn vc6hD v}7z=R~SP^Lw?) 
1!RS_"h~hH,i2#sd-NЭ}#K vGrpmT/-jfS;`e/_-?¿~˟oT>#:5o|qp_; 1W8V jEѢ9 IHѼ]/ɧ\y8N5dxϥ 6M+.]!C2 (Bgдd0Z6jO{ ѽmxCS(ՌNCXek (*NV4%\֛ XH&H(%jP16#8n$45V]E$ 1:7[ӻD3,f&dY1s01$<_T3dbΧ717,mk9%LD+dbRT|ԨIIaIŐZqH|Y$_}XK;/R*%0 1 fIrX)y@KUWdQ&6g q0͎ٖheXzkd,,oFb)AtܤW NjVUAIEŘ-`V6.8[sN;ʴ¡ 1XܧAVtn{V\ЃT(@'bL&Q& E@i=Hi#cz]ljճA1$ҧV uMW(|P0fæxpJkUlT TՔfPF K)*t*2UE#'f"蹅VJnAd >-Dkr /(/,*4F!SjtŶ"lj%_JEh rk,V J9K9P(&H`ʶ/W7!Ђam%".`s*B&nc?ՑSMGᦦh@ ք Ĭ k!FȦD2рÅW֢Kc e`E"G$' ̵a:_tʪSdJ6\Rʀ Ҹ-0b"@ Y)wݣuVb(aES$+ӡfle۳̕)LQגЄjЀEİ]B ̉OvruK4ً8$nU&΢-[KY:'QblB$+?<XbA 4"l{+LfSDLQ-hC䊵VJ\*IlL6Uib+vfm,[ٚ3H,x WVʮؚEDP ]"}Ll>.B/ǷJ+dśOY8!\l`P!?-~/"fj/fo_bwnk O!/`'o-HY7{"M>C7==`>;e=Ϸu=~cփŴ]/oR®Fȟ4ۗq'{|a5~tfZLYv1Rs{3ݛup!U{󽻾`"f3~:x2N7s]L7#!xiW-tx޹ړTN&ܳYEҌ9Wc"6Z .[4K\12:H n}9,s>w jl)7骂.I8jS)nsi9&vj ]Ԧ֔+F-G A8NT#DQ&uM{V:}ᗞ37&^ziZ.Ml}X1z D=nª 1T/PCB \QR҅A6fm/}@!%I٫zZ_*qX\e9kKLpP5 YTP*vO"NU! _55zeHu(yIrSЦ&č1"bLxT7$G;>{#4`E9YWk9mejYӐ\#E3mZ.&P"@*_faec;ocċ-&Oyqݬ4dtL ɷ 5M$D?+t)!vƾdLҿ( 8&߭ӡop 5Y\On1L_oVptwDZ9o0@0Z1s;b SHj\[>9#VV19F߰l}vK@g77jG|#ȹP/7r)վ!)D5E4)K jlO РF.C/^{ϟމ?WEU6lA1$-Ð@ֆwyk^[tA.H.y6G#j?z%P?RG % \ބgkh\Mޫ&6aMX+Y! Р}S^t$w|!R&j\ TeOQ5-[^6" {ͲLw텰VmYg<,G2fզ2|lzڗ \~2Mh>ŀ[A3Bbo]7?,wLFX(ceeC4(Vo̾'2b `C?ٻ8+}Q^"#2/ص CHՊx'[t7&)R$ꪨ'"rZ> 'pcs?97(QBqpr'~ONK#s2S'kylyR>Mx䁠 }ן/;x6il~N/_lrz[i/Fo۟\ n+ط:`+M{D|6WK_|^lwW˗yГ[?V~ѭ~:x{Q3w: ,$:a惭zaHkmI&v5 _@yWr}{o[kٔ1:k.]<t1ΫK:gz^?_.=Yśp߽Q|(`ݲ 07..dz3 軵l`9WBs?/ε]=b ) {Uç,Կ+9=Z[`zOjnblDvvmދM~ pMz/%(xҬoITT%j4v!qA:yАbɽ+HWi"O9GZsɵ#$/=2<8.6cb \RgQԎN'_Aɗ~/dDž,e\w\vB**r%ɋ˺rXmxV_fz̛N,?{837"pHVw몃usj;?+NG+4;*r?Z\ F}PhnRn9xOA].>5Z"IgpS @E[G6ԋ7IѠZ9JY#HÌH%TWmU}mq8BJYNV"O|p1g`kXmvt3lp͚\94"AG Q-AF ٠P:欵{k/v3EF?Dbeg!/D.}пہgWUe)^*xlԜMds9)#jsRG#v}'!6SHJ67+Y6:A(fIنVΫlM45L4@ڮ:,ffh 9d\[͆x:Wc:Gic͗-.]8b8,7|S-6\&6 U!쮌GOZ \|D5}1XՀDIGIؿܢvX(p%Ԟ]Rx;l%isv2xL󓭔ŻL_{vMexⅳ?z'?C|u2NA'1`z[y~x9Ooǧ:>ZDc<{_3O*,>C!7 (@t1'RXJ h,g aɂ'\sP,Mٙrݓ+lP!ad20{#~\4wR2͎}{~(}іښD'.dzøw&`B$&sKd-]on I@c7*phEqgZ@%⠖NСٓv}jK-~b2+zX+v%|jfB VfR%}ᤗrnk>] '?eC +ZU;="/,r Y%sfQ|G@3zs46ִ/}K.@En7 s>BxXFf΋5T`沛{6cd+>"x=UxIK Ζ.,FH Եl'@?n͵#nDjTT;? 
rx"|;TX@=P +@Њ 4 OP)6*aʵͭ8 j`jmO2,̠#6`p/Y=t.# .uy E3-Z{>B-<:@π8V7 "E-Svb>-b~J#*BάhWws[Ʀ:5)q25Yԏ%1 Pi‡MuYl5'7Um6EBL^g=^eS }sdN4+NsÂνaA6,s*p!]hJP>/NyP:]H #Xo1jU )YН1Jy;uΦ>q]~xݛ&vZAx_RIهm3gb I~m;^lPd 9ܝ V^$MÐSn^}50B)b~ޯ|QBr>hz Gր"#nF3 0OOly!nd xv N>ue4Řn4q'ա~;= ˿}Fy@/X t+TcUt}ЕKs_\Owڮeɾ8\FssL|:czioL ӭBPř3k^ ݢ| dj*7ˌN)ts̷K,@{A H9L$vbXg[^gvt[A5}0͂5}+ON>3'.' 6<֚> }ߙٽROjX+ۅX4nvMšS;<ƴUP;`^JV3=ϻkfJ>[B8Jva 0^wX%[A|sVد+v\e6?y=9J?j玄)bwGv_ a+ dEݑy`W+f3)"$I~G\Q&[+cphȇ0/zrѕ }QLav_Wj s0$ΞGJ7j8+vEr{vbs^bԯnv@wm\BWj sX)i:j[A| 6Gn@|HDkۍ82bwQ3 '[>, Cu&Ciݑ&lVSݬ昗M]]yD^pqo4N$v?4:Fg𛓾gaSW8^)TfRwN^^}{vsLCה>(!CƲ933et~*v93Y9q8w攓x9933Q|=sjH̯ډ>1c86#9G(3{"̮9ʹj#֗:̹ݼ`fvM,3^|gvab?>H=<j*k0g :1@D23P[PJB,+qr(yEwoϏRM]b;nhPxo[6z2!: Wg4\(R<]ši0UBFZ"H?_GM?.B/1e/Z윸|>?77um9;}}^|xsX&㟷| ROՠR4{pUp-@- }"I9u"rS#:Ng?.'h_S{هq 4>ӽЇ,+SP6ٹ6 ?eX:x_E0_tO*//{m5ӊ0 Q<+,]@d:hiyH9.fdHq׺Vr5,W-j[u(t;Τ$jn%y# Uz Y~B\9,\}U`Pp#h "nwFٮK-{b@(9QYRfQW LκV bXػʜ>1DhQŬpkSuTL:o)Z9!"ii @̖.!0GƛL@Lʩl Oרrj6ŀ#jNI~^84޿"yDZc_lc`YCdq$-jlRltWZmbc3H J$qlƠ.DPW0]3 A0u#LHBZ`!8ʥAD:.Sh/ٸՠah]KC.1b5҇WFaqPzIʊ֮F!gE]@weعS[tXʒȶ od60oG3Iy7S}y(}!I:;Ө}A,ʠc,\6 ڧ܌4Rs J5Dsbâ{PRJrw7Maީ Y@!90^GM8F![`e,9۲gnbL{kK;\MUi0ʾn->lr ɆcIX TW*ЫQO*\xoU"NNǻD@QoǻڼRP p%umɻdCxֽm]Amjc&-}v45~R52~{Kϫɯh5y|c<$g:vE~Md4dk=0;5^;3l?yb{in߂sv]5QŹ8@1 cgj7S{=jG}^fj spWG n߸ =sjn苧v3b:FYL6CgOfvOiw,s}dȉe}nv3BWXevOi7 kvХ=ai0vv3WXܯ-I ^k)0/d]yb&C&zA|cխDbi68z=G9ےJS,T$@ lN (Qx'f-F ( ]j2.Vvc Jg*4"rRв. y%̵FIQV*f-{AjK_s*jL4Tq(P 'kDVA&|"v{XX>/"J因P ?ֵq3HA#0G*5Q*C`X~:fYرk5m1RD"ZtZq(/^ 4ʥRcS_e94`snFR\ 'N?u˭)R(!JєOH*d[ZUXZ[Uـ,[h5D$ Xm*R˓|}E<;$4? 
Feb 27 19:34:40 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 19:34:40 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 19:34:40 crc restorecon[4747]:
/var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c968,c969 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc 
restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 
19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 19:34:41 crc restorecon[4747]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 19:34:41 crc restorecon[4747]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 19:34:41 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 19:34:42 crc kubenswrapper[4941]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.233670 4941 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239240 4941 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239263 4941 feature_gate.go:330] unrecognized feature gate: Example Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239270 4941 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239278 4941 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239285 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239291 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239297 4941 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239302 4941 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239308 4941 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239314 4941 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239320 4941 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239334 4941 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239340 4941 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239355 4941 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239361 4941 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239366 4941 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239371 4941 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239377 4941 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239382 4941 
feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239389 4941 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239395 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239401 4941 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239406 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239411 4941 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239422 4941 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239427 4941 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239432 4941 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239437 4941 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239443 4941 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239449 4941 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239454 4941 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239459 4941 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239464 4941 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239493 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239499 4941 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239504 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239509 4941 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239514 4941 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239519 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239524 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239529 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239537 4941 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239544 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239550 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239556 4941 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239561 4941 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239568 4941 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239573 4941 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239578 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239584 4941 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239589 4941 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239594 4941 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239599 4941 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239606 4941 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239613 4941 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239619 4941 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239626 4941 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239631 4941 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239637 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239642 4941 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239647 4941 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239652 4941 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239657 4941 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239662 4941 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239667 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239672 4941 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239677 4941 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239683 4941 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239688 4941 
feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239693 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.239698 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239793 4941 flags.go:64] FLAG: --address="0.0.0.0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239805 4941 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239816 4941 flags.go:64] FLAG: --anonymous-auth="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239823 4941 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239831 4941 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239837 4941 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239847 4941 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239856 4941 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239862 4941 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239869 4941 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239876 4941 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239882 4941 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239889 4941 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 27 19:34:42 crc 
kubenswrapper[4941]: I0227 19:34:42.239895 4941 flags.go:64] FLAG: --cgroup-root="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239901 4941 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239908 4941 flags.go:64] FLAG: --client-ca-file="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239914 4941 flags.go:64] FLAG: --cloud-config="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239920 4941 flags.go:64] FLAG: --cloud-provider="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239925 4941 flags.go:64] FLAG: --cluster-dns="[]" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239932 4941 flags.go:64] FLAG: --cluster-domain="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239938 4941 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239944 4941 flags.go:64] FLAG: --config-dir="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239950 4941 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239956 4941 flags.go:64] FLAG: --container-log-max-files="5" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239964 4941 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239970 4941 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239976 4941 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239982 4941 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239988 4941 flags.go:64] FLAG: --contention-profiling="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.239994 4941 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 27 19:34:42 crc kubenswrapper[4941]: 
I0227 19:34:42.240000 4941 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240006 4941 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240012 4941 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240019 4941 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240026 4941 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240031 4941 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240037 4941 flags.go:64] FLAG: --enable-load-reader="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240044 4941 flags.go:64] FLAG: --enable-server="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240050 4941 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240059 4941 flags.go:64] FLAG: --event-burst="100" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240065 4941 flags.go:64] FLAG: --event-qps="50" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240072 4941 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240078 4941 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240084 4941 flags.go:64] FLAG: --eviction-hard="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240091 4941 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240097 4941 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240103 4941 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 27 19:34:42 crc 
kubenswrapper[4941]: I0227 19:34:42.240110 4941 flags.go:64] FLAG: --eviction-soft="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240115 4941 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240121 4941 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240128 4941 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240134 4941 flags.go:64] FLAG: --experimental-mounter-path="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240140 4941 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240146 4941 flags.go:64] FLAG: --fail-swap-on="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240152 4941 flags.go:64] FLAG: --feature-gates="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240159 4941 flags.go:64] FLAG: --file-check-frequency="20s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240165 4941 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240172 4941 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240178 4941 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240184 4941 flags.go:64] FLAG: --healthz-port="10248" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240190 4941 flags.go:64] FLAG: --help="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240196 4941 flags.go:64] FLAG: --hostname-override="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240202 4941 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240208 4941 flags.go:64] FLAG: --http-check-frequency="20s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 
19:34:42.240214 4941 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240220 4941 flags.go:64] FLAG: --image-credential-provider-config="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240226 4941 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240232 4941 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240238 4941 flags.go:64] FLAG: --image-service-endpoint="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240245 4941 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240251 4941 flags.go:64] FLAG: --kube-api-burst="100" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240257 4941 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240265 4941 flags.go:64] FLAG: --kube-api-qps="50" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240271 4941 flags.go:64] FLAG: --kube-reserved="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240278 4941 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240284 4941 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240291 4941 flags.go:64] FLAG: --kubelet-cgroups="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240297 4941 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240304 4941 flags.go:64] FLAG: --lock-file="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240309 4941 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240316 4941 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 27 19:34:42 crc 
kubenswrapper[4941]: I0227 19:34:42.240322 4941 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240330 4941 flags.go:64] FLAG: --log-json-split-stream="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240337 4941 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240343 4941 flags.go:64] FLAG: --log-text-split-stream="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240349 4941 flags.go:64] FLAG: --logging-format="text" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240356 4941 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240362 4941 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240368 4941 flags.go:64] FLAG: --manifest-url="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240374 4941 flags.go:64] FLAG: --manifest-url-header="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240382 4941 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240388 4941 flags.go:64] FLAG: --max-open-files="1000000" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240395 4941 flags.go:64] FLAG: --max-pods="110" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240402 4941 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240408 4941 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240414 4941 flags.go:64] FLAG: --memory-manager-policy="None" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240420 4941 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240426 4941 flags.go:64] FLAG: 
--minimum-image-ttl-duration="2m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240432 4941 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240439 4941 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240452 4941 flags.go:64] FLAG: --node-status-max-images="50" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240458 4941 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240464 4941 flags.go:64] FLAG: --oom-score-adj="-999" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240493 4941 flags.go:64] FLAG: --pod-cidr="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240500 4941 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240508 4941 flags.go:64] FLAG: --pod-manifest-path="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240514 4941 flags.go:64] FLAG: --pod-max-pids="-1" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240520 4941 flags.go:64] FLAG: --pods-per-core="0" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240526 4941 flags.go:64] FLAG: --port="10250" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240532 4941 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240538 4941 flags.go:64] FLAG: --provider-id="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240544 4941 flags.go:64] FLAG: --qos-reserved="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240550 4941 flags.go:64] FLAG: --read-only-port="10255" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240557 4941 flags.go:64] FLAG: 
--register-node="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240563 4941 flags.go:64] FLAG: --register-schedulable="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240568 4941 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240578 4941 flags.go:64] FLAG: --registry-burst="10" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240584 4941 flags.go:64] FLAG: --registry-qps="5" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240590 4941 flags.go:64] FLAG: --reserved-cpus="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240596 4941 flags.go:64] FLAG: --reserved-memory="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240604 4941 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240610 4941 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240617 4941 flags.go:64] FLAG: --rotate-certificates="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240623 4941 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240629 4941 flags.go:64] FLAG: --runonce="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240635 4941 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240641 4941 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240647 4941 flags.go:64] FLAG: --seccomp-default="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240653 4941 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240659 4941 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240665 4941 flags.go:64] 
FLAG: --storage-driver-db="cadvisor" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240671 4941 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240677 4941 flags.go:64] FLAG: --storage-driver-password="root" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240683 4941 flags.go:64] FLAG: --storage-driver-secure="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240690 4941 flags.go:64] FLAG: --storage-driver-table="stats" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240696 4941 flags.go:64] FLAG: --storage-driver-user="root" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240702 4941 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240708 4941 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240714 4941 flags.go:64] FLAG: --system-cgroups="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240719 4941 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240729 4941 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240735 4941 flags.go:64] FLAG: --tls-cert-file="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240740 4941 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240748 4941 flags.go:64] FLAG: --tls-min-version="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240753 4941 flags.go:64] FLAG: --tls-private-key-file="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240759 4941 flags.go:64] FLAG: --topology-manager-policy="none" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240765 4941 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240771 
4941 flags.go:64] FLAG: --topology-manager-scope="container" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240781 4941 flags.go:64] FLAG: --v="2" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240789 4941 flags.go:64] FLAG: --version="false" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240796 4941 flags.go:64] FLAG: --vmodule="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240803 4941 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.240810 4941 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240952 4941 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240960 4941 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240966 4941 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240973 4941 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240979 4941 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240986 4941 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240992 4941 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.240998 4941 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241004 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241009 4941 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 
19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241014 4941 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241020 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241025 4941 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241035 4941 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241040 4941 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241045 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241051 4941 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241056 4941 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241062 4941 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241067 4941 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241072 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241077 4941 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241083 4941 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241088 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241093 4941 feature_gate.go:330] unrecognized feature gate: UpgradeStatus 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241099 4941 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241104 4941 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241111 4941 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241118 4941 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241156 4941 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241164 4941 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241171 4941 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241177 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241183 4941 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241190 4941 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241197 4941 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241203 4941 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241208 4941 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241214 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241220 4941 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241226 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241232 4941 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241238 4941 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241243 4941 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241249 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241257 4941 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241263 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241269 4941 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241274 4941 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241281 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241287 4941 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241292 4941 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241298 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241303 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241309 4941 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241315 4941 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241320 4941 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241326 4941 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241333 4941 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241347 4941 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241355 4941 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241361 4941 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241368 4941 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241376 4941 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241382 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241388 4941 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241394 4941 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241400 4941 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241405 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241411 4941 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.241417 4941 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.242172 4941 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.251907 4941 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.251944 4941 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252069 4941 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252083 4941 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252091 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252100 4941 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252109 4941 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252117 4941 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252125 4941 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252133 4941 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252141 4941 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252149 4941 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252157 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252166 4941 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252173 4941 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252181 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252189 4941 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252197 4941 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252208 4941 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252219 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252228 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252237 4941 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252245 4941 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252253 4941 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252262 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252270 4941 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252278 4941 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252288 4941 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252318 4941 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252327 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252334 4941 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252345 4941 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252355 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252365 4941 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252373 4941 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252381 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252391 4941 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252401 4941 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252408 4941 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252417 4941 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252425 4941 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252433 4941 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252441 4941 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252449 4941 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252457 4941 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252465 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252505 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252515 4941 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252526 4941 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252534 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252542 4941 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252550 4941 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252557 4941 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252566 4941 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252575 4941 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252583 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252590 4941 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252598 4941 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252607 4941 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252614 4941 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252622 4941 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252630 4941 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252640 4941 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252651 4941 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252659 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252668 4941 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252676 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252684 4941 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252693 4941 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252700 4941 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252708 4941 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252716 4941 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252724 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.252737 4941 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252972 4941 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252987 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.252996 4941 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253006 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253014 4941 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253024 4941 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253033 4941 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253040 4941 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253048 4941 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253056 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253064 4941 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253072 4941 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253080 4941 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253087 4941 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253096 4941 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253104 4941 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253112 4941 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253119 4941 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253127 4941 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253135 4941 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253143 4941 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253187 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253198 4941 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253209 4941 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253220 4941 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253230 4941 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253238 4941 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253247 4941 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253255 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253263 4941 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253271 4941 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253279 4941 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253288 4941 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253296 4941 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253305 4941 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253312 4941 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253321 4941 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253329 4941 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253336 4941 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253344 4941 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253352 4941 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253360 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253368 4941 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253376 4941 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253384 4941 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253391 4941 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253399 4941 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253407 4941 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253415 4941 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253423 4941 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253431 4941 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253439 4941 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253447 4941 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253455 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253462 4941 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253504 4941 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253518 4941 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253528 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253536 4941 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253544 4941 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253552 4941 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253559 4941 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253568 4941 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253578 4941 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253589 4941 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253598 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253607 4941 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253615 4941 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253625 4941 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253635 4941 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.253644 4941 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.253657 4941 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.253890 4941 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.258582 4941 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.263243 4941 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.263374 4941 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.265508 4941 server.go:997] "Starting client certificate rotation"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.265540 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.265758 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.291556 4941 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.294783 4941 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.295418 4941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.319324 4941 log.go:25] "Validated CRI v1 runtime API"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.353570 4941 log.go:25] "Validated CRI v1 image API"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.355769 4941 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.363439 4941 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-19-29-47-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.363467 4941 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.380761 4941 manager.go:217] Machine: {Timestamp:2026-02-27 19:34:42.378919761 +0000 UTC m=+0.640060221 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad BootID:8033ec25-48ed-4948-8194-eb2027952881 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e4:f8:dd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e4:f8:dd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ae:62:fc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:52:11:ad Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c5:b4:db Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:62:67:07 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:2e:32:f6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:40:08:0b:4e:10 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:1a:4a:0d:37:32 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.381040 4941 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.381309 4941 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.383288 4941 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.383528 4941 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.383570 4941 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.383829 4941 topology_manager.go:138] "Creating topology manager with none policy"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.383844 4941 container_manager_linux.go:303] "Creating device plugin manager"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.384224 4941 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.384264 4941 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.384428 4941 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.384544 4941 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.390533 4941 kubelet.go:418] "Attempting to sync node with API server"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.390559 4941 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.390587 4941 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.390603 4941 kubelet.go:324] "Adding apiserver pod source"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.390617 4941 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.395241 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused
Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.395307 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.395408 4941 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.395523 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused
Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.395617 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.396378 4941 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.397903 4941 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399258 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399279 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399286 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399292 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399302 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399309 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399315 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399326 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399334 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399340 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399349 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.399356 4941 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.401624 4941 plugins.go:603] "Loaded volume plugin"
pluginName="kubernetes.io/csi" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.401979 4941 server.go:1280] "Started kubelet" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.402119 4941 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.402224 4941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.402820 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.403310 4941 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 27 19:34:42 crc systemd[1]: Started Kubernetes Kubelet. Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.405849 4941 server.go:460] "Adding debug handlers to kubelet server" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.406640 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.406675 4941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.407617 4941 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.407632 4941 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.407742 4941 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.409446 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.409446 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.409532 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.413953 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="200ms" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.414536 4941 factory.go:55] Registering systemd factory Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.415864 4941 factory.go:221] Registration of the systemd container factory successfully Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.409142 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.75:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18983180bf191683 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,LastTimestamp:2026-02-27 
19:34:42.401957507 +0000 UTC m=+0.663097917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.417740 4941 factory.go:153] Registering CRI-O factory Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.417765 4941 factory.go:221] Registration of the crio container factory successfully Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.417827 4941 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.417849 4941 factory.go:103] Registering Raw factory Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.417866 4941 manager.go:1196] Started watching for new ooms in manager Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.418353 4941 manager.go:319] Starting recovery of all containers Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423085 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423134 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423149 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423162 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423176 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423188 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423202 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423215 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423231 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423245 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423257 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423268 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423280 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423294 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423306 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423324 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423336 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423347 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423359 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423370 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423428 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423441 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423452 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423465 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423525 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423537 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423602 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423616 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423629 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423640 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423652 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423666 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423678 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423690 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423702 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423714 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423753 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423765 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423776 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423788 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423799 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423810 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423822 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423833 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423845 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423859 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423871 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423884 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423898 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423912 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423928 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423944 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423969 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.423988 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424006 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424025 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424041 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424127 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424148 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424168 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424184 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424225 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424242 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424262 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424279 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424294 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424305 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424318 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424330 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424342 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424355 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424367 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424381 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424395 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424407 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424419 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424431 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424444 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424458 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424509 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424524 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424538 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424551 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424563 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424576 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424588 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424600 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424611 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424626 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424638 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424650 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424662 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424679 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424695 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424712 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424768 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424783 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424796 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424814 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424828 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424842 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424856 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424871 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424884 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424907 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424925 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424942 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.424988 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425007 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425067 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425086 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425105 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425122 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425137 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425154 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425170 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425186 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425202 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425218 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425234 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425248 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425264 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425278 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425294 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425308 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425330 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425348 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425364 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425381 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425395 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425412 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425427 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425441 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425456 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425500 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425518 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425534 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425552 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425567 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425618 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425637 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425652 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425666 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425682 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425828 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425853 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.425869 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428346 4941 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428382 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428402 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428420 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428440 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428458 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428501 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428521 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428538 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428559 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428577 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428594 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428615 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428631 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428648 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428667 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428683 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428699 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428716 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428735 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428751 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428768 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428785 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428802 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428819 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428835 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428867 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428885 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428904 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428920 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428937 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.428955 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429017 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429066 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429085 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429100 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429115 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429129 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429143 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429157 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429171 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429184 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429198 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429211 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429225 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 27 19:34:42 crc kubenswrapper[4941]: I0227
19:34:42.429237 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429285 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429302 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429316 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429331 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429343 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429355 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429367 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429767 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429783 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429796 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429808 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429825 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429836 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429849 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429861 4941 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429873 4941 reconstruct.go:97] "Volume reconstruction finished" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.429881 4941 reconciler.go:26] "Reconciler: start to sync state" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.440931 4941 manager.go:324] Recovery completed Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.461592 4941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.463748 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465607 4941 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465713 4941 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465746 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465784 4941 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.465792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.465869 4941 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.466616 4941 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.466650 4941 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.466678 4941 state_mem.go:36] "Initialized new in-memory state store" Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.468131 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.468239 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.480050 4941 policy_none.go:49] "None policy: Start" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.480996 4941 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.481042 4941 state_mem.go:35] "Initializing new in-memory state store" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.509675 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.544612 4941 manager.go:334] "Starting Device Plugin manager" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.544838 4941 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.544857 4941 server.go:79] "Starting device plugin registration server" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.545191 4941 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.545222 4941 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.545410 4941 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.545514 4941 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.545522 4941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.551961 4941 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.566153 4941 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.566218 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567523 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567559 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567569 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567699 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567882 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.567911 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.568733 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.568778 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.568787 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569323 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569348 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569357 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569463 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569800 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.569843 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570536 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570558 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570711 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570730 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570739 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570844 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570968 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.570994 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571698 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571706 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571760 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571764 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.571817 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572001 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572084 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572103 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572674 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572714 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.572723 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.573262 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.573286 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.573296 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.573459 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.573574 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.574183 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.574205 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.574214 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.614810 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="400ms" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632647 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632686 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632713 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632736 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632772 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632890 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632949 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.632996 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633034 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633059 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633098 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633120 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633137 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633154 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.633185 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.646304 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.647577 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.647612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.647623 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.647648 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.648018 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.75:6443: connect: 
connection refused" node="crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734704 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734769 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734801 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734830 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734859 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734891 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734918 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734926 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734945 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.734977 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735022 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735038 
4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735016 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735117 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735082 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735057 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735151 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735201 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735210 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735204 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735277 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735245 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735420 4941 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735525 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735581 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735682 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735695 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735776 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735811 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.735880 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.848544 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.849901 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.849976 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.849999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.850047 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:42 crc kubenswrapper[4941]: E0227 19:34:42.850713 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.75:6443: connect: connection refused" node="crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.898858 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.904357 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.911247 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.936812 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: I0227 19:34:42.941060 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.950418 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ff73532cc6780ac94b158e19be7b0737da0e3d3b34e9dbeb0b8cffa7e01680fb WatchSource:0}: Error finding container ff73532cc6780ac94b158e19be7b0737da0e3d3b34e9dbeb0b8cffa7e01680fb: Status 404 returned error can't find the container with id ff73532cc6780ac94b158e19be7b0737da0e3d3b34e9dbeb0b8cffa7e01680fb Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.953142 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-062cdde0ea4692dbd32e94e389b3d3c8268710e6c8c530c8827a7805496a448f WatchSource:0}: Error finding container 062cdde0ea4692dbd32e94e389b3d3c8268710e6c8c530c8827a7805496a448f: Status 404 returned error 
can't find the container with id 062cdde0ea4692dbd32e94e389b3d3c8268710e6c8c530c8827a7805496a448f Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.954013 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-15a2e75815475f003e038020472148537b0e3d2bd9414aac67c10a4cfcc8f5ff WatchSource:0}: Error finding container 15a2e75815475f003e038020472148537b0e3d2bd9414aac67c10a4cfcc8f5ff: Status 404 returned error can't find the container with id 15a2e75815475f003e038020472148537b0e3d2bd9414aac67c10a4cfcc8f5ff Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.961026 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ec5164a0a27c63218ee0d1dd28c3738c15979227b3caae7b84ecf6a8db9be3b5 WatchSource:0}: Error finding container ec5164a0a27c63218ee0d1dd28c3738c15979227b3caae7b84ecf6a8db9be3b5: Status 404 returned error can't find the container with id ec5164a0a27c63218ee0d1dd28c3738c15979227b3caae7b84ecf6a8db9be3b5 Feb 27 19:34:42 crc kubenswrapper[4941]: W0227 19:34:42.963528 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-828332076cb316a09156a84c29eebbecbb0dc841c3de73771bb3adafe4658224 WatchSource:0}: Error finding container 828332076cb316a09156a84c29eebbecbb0dc841c3de73771bb3adafe4658224: Status 404 returned error can't find the container with id 828332076cb316a09156a84c29eebbecbb0dc841c3de73771bb3adafe4658224 Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.015593 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: 
connection refused" interval="800ms" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.251650 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.253370 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.253453 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.253493 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.253543 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.254198 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.75:6443: connect: connection refused" node="crc" Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.403617 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.469987 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"828332076cb316a09156a84c29eebbecbb0dc841c3de73771bb3adafe4658224"} Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.470942 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec5164a0a27c63218ee0d1dd28c3738c15979227b3caae7b84ecf6a8db9be3b5"} Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.471829 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"15a2e75815475f003e038020472148537b0e3d2bd9414aac67c10a4cfcc8f5ff"} Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.472820 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"062cdde0ea4692dbd32e94e389b3d3c8268710e6c8c530c8827a7805496a448f"} Feb 27 19:34:43 crc kubenswrapper[4941]: I0227 19:34:43.473516 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ff73532cc6780ac94b158e19be7b0737da0e3d3b34e9dbeb0b8cffa7e01680fb"} Feb 27 19:34:43 crc kubenswrapper[4941]: W0227 19:34:43.486848 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.486951 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.816567 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="1.6s" Feb 27 19:34:43 crc kubenswrapper[4941]: W0227 19:34:43.854932 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.855027 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:43 crc kubenswrapper[4941]: W0227 19:34:43.855749 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:43 crc kubenswrapper[4941]: E0227 19:34:43.855798 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:43 crc kubenswrapper[4941]: W0227 19:34:43.911297 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:43 crc 
kubenswrapper[4941]: E0227 19:34:43.911550 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.055239 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.056578 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.056641 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.056657 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.056694 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:44 crc kubenswrapper[4941]: E0227 19:34:44.057284 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.75:6443: connect: connection refused" node="crc" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.369895 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 19:34:44 crc kubenswrapper[4941]: E0227 19:34:44.370940 4941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.404521 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.479249 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114" exitCode=0 Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.479354 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.479415 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.481500 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.481545 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.481558 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.486026 4941 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e" exitCode=0 Feb 27 19:34:44 crc 
kubenswrapper[4941]: I0227 19:34:44.486196 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.486365 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.488032 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.488543 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.488595 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.488612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.489518 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.489564 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.489575 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.491440 4941 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6" exitCode=0 Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.491771 4941 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.491929 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.495241 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.495292 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.495306 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.501584 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.501677 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.501704 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 
19:34:44.501719 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.502499 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.503489 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.503537 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.503551 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.504207 4941 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46" exitCode=0 Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.504263 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46"} Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.504421 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.505851 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.505884 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.505912 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.610795 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:44 crc kubenswrapper[4941]: E0227 19:34:44.685865 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.75:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18983180bf191683 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,LastTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:34:44 crc kubenswrapper[4941]: I0227 19:34:44.693266 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:45 crc kubenswrapper[4941]: W0227 19:34:45.392116 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:45 crc kubenswrapper[4941]: E0227 19:34:45.392274 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.406267 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:45 crc kubenswrapper[4941]: E0227 19:34:45.417837 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="3.2s" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.509155 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa830b245caf4e84d125dcb5b1d4abda047cd1b00bfc9010f45d57ac28cbb86e"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.509202 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5b0f85b9cea9d0041d1782c27844e121dcc9b493d0007b246826df38298f43fc"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.511135 4941 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7" exitCode=0 Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.511188 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.511282 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.512458 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.512514 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.512524 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.513680 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.513706 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.516209 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.516234 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49"} Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 
19:34:45.516261 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517307 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517328 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517360 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517339 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517395 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.517372 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.658070 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.658977 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.659008 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.659019 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:45 crc kubenswrapper[4941]: I0227 19:34:45.659042 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:45 crc 
kubenswrapper[4941]: E0227 19:34:45.659425 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.75:6443: connect: connection refused" node="crc" Feb 27 19:34:46 crc kubenswrapper[4941]: W0227 19:34:46.099973 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:46 crc kubenswrapper[4941]: E0227 19:34:46.100060 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:46 crc kubenswrapper[4941]: W0227 19:34:46.134237 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:34:46 crc kubenswrapper[4941]: E0227 19:34:46.134330 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.75:6443: connect: connection refused" logger="UnhandledError" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.404117 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.75:6443: connect: 
connection refused Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.521043 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a77558cd6a5ef63a2b2d6db5d9bcd243f504fe8a27412832477698d908e9d617"} Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.521100 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.522193 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.522256 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.522276 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.523559 4941 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141" exitCode=0 Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.523636 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141"} Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.523677 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.524522 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.524565 4941 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.524576 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.526154 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528175 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6581dec623e63b5b9b3ce92cf011cc1199cb471103fcd782536fe0de39c759a1" exitCode=255 Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528229 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6581dec623e63b5b9b3ce92cf011cc1199cb471103fcd782536fe0de39c759a1"} Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528270 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528281 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87"} Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528304 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93"} Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528332 4941 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.528275 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529432 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529455 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529524 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529533 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529561 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529572 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529663 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529726 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529740 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:46 crc kubenswrapper[4941]: I0227 19:34:46.529923 4941 scope.go:117] "RemoveContainer" containerID="6581dec623e63b5b9b3ce92cf011cc1199cb471103fcd782536fe0de39c759a1" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 
19:34:47.162533 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.170322 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.359845 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.465082 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.538931 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de"} Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.538975 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8"} Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.538989 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e"} Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.538999 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88"} Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.542133 4941 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.544864 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.545798 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb"} Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.546070 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.546124 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.548338 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.548505 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549679 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549747 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549751 4941 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549775 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549758 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.549907 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.550093 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.550104 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.694166 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 19:34:47 crc kubenswrapper[4941]: I0227 19:34:47.694376 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552299 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da"} Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552827 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552344 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552344 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552448 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.552417 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.554762 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.554800 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.554811 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555139 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555211 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555252 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555270 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555384 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555490 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555405 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555648 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.555675 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.558441 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.739337 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.860295 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.861857 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.861890 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.861901 4941 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:48 crc kubenswrapper[4941]: I0227 19:34:48.861925 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.090774 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.554538 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.554585 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.554538 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555620 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555679 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555711 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555849 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555920 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.555943 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.556424 
4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.556556 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:49 crc kubenswrapper[4941]: I0227 19:34:49.556643 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:50 crc kubenswrapper[4941]: I0227 19:34:50.557659 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:50 crc kubenswrapper[4941]: I0227 19:34:50.559002 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:50 crc kubenswrapper[4941]: I0227 19:34:50.559083 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:50 crc kubenswrapper[4941]: I0227 19:34:50.559116 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:52 crc kubenswrapper[4941]: I0227 19:34:52.208453 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 19:34:52 crc kubenswrapper[4941]: I0227 19:34:52.209263 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:52 crc kubenswrapper[4941]: I0227 19:34:52.210338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:52 crc kubenswrapper[4941]: I0227 19:34:52.210369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:52 crc kubenswrapper[4941]: I0227 19:34:52.210380 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:52 crc 
kubenswrapper[4941]: E0227 19:34:52.552030 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 19:34:54 crc kubenswrapper[4941]: I0227 19:34:54.615921 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:34:54 crc kubenswrapper[4941]: I0227 19:34:54.616078 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:54 crc kubenswrapper[4941]: I0227 19:34:54.617099 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:54 crc kubenswrapper[4941]: I0227 19:34:54.617149 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:54 crc kubenswrapper[4941]: I0227 19:34:54.617161 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:56 crc kubenswrapper[4941]: W0227 19:34:56.570843 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 19:34:56 crc kubenswrapper[4941]: I0227 19:34:56.570955 4941 trace.go:236] Trace[430719660]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 19:34:46.570) (total time: 10000ms): Feb 27 19:34:56 crc kubenswrapper[4941]: Trace[430719660]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:34:56.570) Feb 27 19:34:56 crc kubenswrapper[4941]: Trace[430719660]: [10.000893708s] [10.000893708s] END Feb 27 19:34:56 crc kubenswrapper[4941]: E0227 19:34:56.570985 4941 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.166805 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.167408 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.169703 4941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.170633 4941 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.170683 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 19:34:57 crc kubenswrapper[4941]: W0227 19:34:57.171505 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.171686 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:34:57 crc kubenswrapper[4941]: W0227 19:34:57.173420 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.173560 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.174243 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18983180bf191683 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,LastTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:34:57 crc kubenswrapper[4941]: W0227 19:34:57.174724 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.174808 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.180717 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.182306 4941 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.182377 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.406983 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:57Z is after 2026-02-23T05:33:13Z Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.471117 4941 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]log ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]etcd ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/priority-and-fairness-filter ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-apiextensions-informers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-apiextensions-controllers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/crd-informer-synced ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-system-namespaces-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 27 19:34:57 crc kubenswrapper[4941]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 27 19:34:57 crc kubenswrapper[4941]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/bootstrap-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/start-kube-aggregator-informers ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-registration-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-discovery-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]autoregister-completion ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-openapi-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 27 19:34:57 crc kubenswrapper[4941]: livez check failed Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.471171 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:34:57 crc 
kubenswrapper[4941]: I0227 19:34:57.576538 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.577914 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.580087 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" exitCode=255 Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.580164 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb"} Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.580236 4941 scope.go:117] "RemoveContainer" containerID="6581dec623e63b5b9b3ce92cf011cc1199cb471103fcd782536fe0de39c759a1" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.580507 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.581729 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.581779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.581797 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.582595 4941 scope.go:117] "RemoveContainer" 
containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" Feb 27 19:34:57 crc kubenswrapper[4941]: E0227 19:34:57.582896 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.694439 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 19:34:57 crc kubenswrapper[4941]: I0227 19:34:57.694538 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.405899 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:58Z is after 2026-02-23T05:33:13Z Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.584182 4941 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.594097 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.594382 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.595723 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.595802 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.595829 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.611057 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.686665 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.686896 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.688629 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.688693 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.688709 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 19:34:58 crc kubenswrapper[4941]: I0227 19:34:58.689680 4941 scope.go:117] "RemoveContainer" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" Feb 27 19:34:58 crc kubenswrapper[4941]: E0227 19:34:58.690090 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:34:59 crc kubenswrapper[4941]: I0227 19:34:59.408268 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:34:59Z is after 2026-02-23T05:33:13Z Feb 27 19:34:59 crc kubenswrapper[4941]: I0227 19:34:59.588943 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:34:59 crc kubenswrapper[4941]: I0227 19:34:59.589867 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:34:59 crc kubenswrapper[4941]: I0227 19:34:59.589909 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:34:59 crc kubenswrapper[4941]: I0227 19:34:59.589921 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:00 crc kubenswrapper[4941]: I0227 19:35:00.406958 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:00Z is after 2026-02-23T05:33:13Z Feb 27 19:35:01 crc kubenswrapper[4941]: I0227 19:35:01.408686 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:01Z is after 2026-02-23T05:33:13Z Feb 27 19:35:01 crc kubenswrapper[4941]: W0227 19:35:01.466178 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:01Z is after 2026-02-23T05:33:13Z Feb 27 19:35:01 crc kubenswrapper[4941]: E0227 19:35:01.466263 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.407571 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:02Z is after 2026-02-23T05:33:13Z Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.473081 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.473338 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.474677 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.474717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.474741 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.475425 4941 scope.go:117] "RemoveContainer" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" Feb 27 19:35:02 crc kubenswrapper[4941]: E0227 19:35:02.475650 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.478216 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:35:02 crc kubenswrapper[4941]: E0227 19:35:02.552170 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.597269 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.598403 4941 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.598434 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.598446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:02 crc kubenswrapper[4941]: I0227 19:35:02.600323 4941 scope.go:117] "RemoveContainer" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" Feb 27 19:35:02 crc kubenswrapper[4941]: E0227 19:35:02.600720 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.407017 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:03Z is after 2026-02-23T05:33:13Z Feb 27 19:35:03 crc kubenswrapper[4941]: E0227 19:35:03.571975 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:03Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.581048 4941 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.582917 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.582982 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.582998 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:03 crc kubenswrapper[4941]: I0227 19:35:03.583035 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:35:03 crc kubenswrapper[4941]: E0227 19:35:03.586640 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:03Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 19:35:04 crc kubenswrapper[4941]: I0227 19:35:04.406825 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:04Z is after 2026-02-23T05:33:13Z Feb 27 19:35:05 crc kubenswrapper[4941]: W0227 19:35:05.229167 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:05Z is after 2026-02-23T05:33:13Z Feb 27 19:35:05 crc kubenswrapper[4941]: E0227 19:35:05.229231 4941 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:05 crc kubenswrapper[4941]: I0227 19:35:05.407278 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:05Z is after 2026-02-23T05:33:13Z Feb 27 19:35:05 crc kubenswrapper[4941]: I0227 19:35:05.912384 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 19:35:05 crc kubenswrapper[4941]: E0227 19:35:05.915750 4941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:06 crc kubenswrapper[4941]: W0227 19:35:06.138414 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 2026-02-23T05:33:13Z Feb 27 19:35:06 crc 
kubenswrapper[4941]: E0227 19:35:06.138574 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:06 crc kubenswrapper[4941]: I0227 19:35:06.408527 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 2026-02-23T05:33:13Z Feb 27 19:35:06 crc kubenswrapper[4941]: W0227 19:35:06.743526 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 2026-02-23T05:33:13Z Feb 27 19:35:06 crc kubenswrapper[4941]: E0227 19:35:06.743634 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:07 crc kubenswrapper[4941]: E0227 19:35:07.179327 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:07Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18983180bf191683 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,LastTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.408350 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:07Z is after 2026-02-23T05:33:13Z Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.693820 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.693907 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.693972 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.694118 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.695181 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.695212 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.695221 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.695638 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 27 19:35:07 crc kubenswrapper[4941]: I0227 19:35:07.695778 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3" gracePeriod=30 Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.409284 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:08Z is after 2026-02-23T05:33:13Z Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.615959 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.616430 4941 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3" exitCode=255 Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.616495 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3"} Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.616529 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f"} Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.616629 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.617556 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.617590 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:08 crc kubenswrapper[4941]: I0227 19:35:08.617601 4941 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 19:35:09.091813 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 19:35:09.408394 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:09Z is after 2026-02-23T05:33:13Z Feb 27 19:35:09 crc kubenswrapper[4941]: W0227 19:35:09.448438 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:09Z is after 2026-02-23T05:33:13Z Feb 27 19:35:09 crc kubenswrapper[4941]: E0227 19:35:09.448582 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 19:35:09.619385 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 19:35:09.620451 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 
19:35:09.620572 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:09 crc kubenswrapper[4941]: I0227 19:35:09.620599 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.407033 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:10Z is after 2026-02-23T05:33:13Z Feb 27 19:35:10 crc kubenswrapper[4941]: E0227 19:35:10.576335 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:10Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.587289 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.588678 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.588722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.588737 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:10 crc kubenswrapper[4941]: I0227 19:35:10.588770 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:35:10 crc kubenswrapper[4941]: E0227 19:35:10.592139 4941 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:10Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 19:35:11 crc kubenswrapper[4941]: I0227 19:35:11.409044 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:11Z is after 2026-02-23T05:33:13Z Feb 27 19:35:12 crc kubenswrapper[4941]: I0227 19:35:12.406930 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:12Z is after 2026-02-23T05:33:13Z Feb 27 19:35:12 crc kubenswrapper[4941]: E0227 19:35:12.552363 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 19:35:13 crc kubenswrapper[4941]: I0227 19:35:13.407421 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:13Z is after 2026-02-23T05:33:13Z Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.405624 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-27T19:35:14Z is after 2026-02-23T05:33:13Z Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.694166 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.694500 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.696291 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.696342 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:14 crc kubenswrapper[4941]: I0227 19:35:14.696351 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:15 crc kubenswrapper[4941]: I0227 19:35:15.405235 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 19:35:16 crc kubenswrapper[4941]: I0227 19:35:16.408795 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.184104 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180bf191683 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,LastTimestamp:2026-02-27 19:34:42.401957507 +0000 UTC m=+0.663097917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.188870 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.195829 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.201231 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.207691 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c7c0e2fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.54717209 +0000 UTC m=+0.808312510,LastTimestamp:2026-02-27 19:34:42.54717209 +0000 UTC 
m=+0.808312510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.213950 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.567548478 +0000 UTC m=+0.828688898,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.216907 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.567566519 +0000 UTC m=+0.828706939,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 
27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.218802 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.567575379 +0000 UTC m=+0.828715799,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.221557 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.568764738 +0000 UTC m=+0.829905158,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.224139 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.568783769 +0000 UTC m=+0.829924189,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.227108 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.568791949 +0000 UTC m=+0.829932369,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.228924 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.569338648 +0000 UTC m=+0.830479068,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.230874 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.569354548 +0000 UTC m=+0.830494968,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.233575 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.569361718 +0000 UTC m=+0.830502138,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.235793 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.570549988 +0000 UTC m=+0.831690408,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.236980 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.570562878 +0000 UTC m=+0.831703298,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.239782 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.570569259 +0000 UTC m=+0.831709669,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.246270 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC 
m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.570723154 +0000 UTC m=+0.831863574,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.249827 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.570734824 +0000 UTC m=+0.831875244,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.253441 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.570744044 +0000 UTC m=+0.831884464,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.259170 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.571739347 +0000 UTC m=+0.832879767,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.263589 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e6c769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e6c769 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465769321 +0000 UTC m=+0.726909751,LastTimestamp:2026-02-27 19:34:42.571756378 +0000 UTC m=+0.832896798,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.269934 4941 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.571770248 +0000 UTC m=+0.832910658,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.275386 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e7908c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e7908c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465820812 +0000 UTC m=+0.726961242,LastTimestamp:2026-02-27 19:34:42.571783589 +0000 UTC m=+0.832924009,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.281414 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18983180c2e70e63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18983180c2e70e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.465787491 +0000 UTC m=+0.726927921,LastTimestamp:2026-02-27 19:34:42.57180925 +0000 UTC m=+0.832949670,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.287255 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983180e02b7fe6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.956812262 +0000 UTC m=+1.217952722,LastTimestamp:2026-02-27 19:34:42.956812262 +0000 UTC m=+1.217952722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.294016 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18983180e033903a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.95734073 +0000 UTC m=+1.218481150,LastTimestamp:2026-02-27 19:34:42.95734073 +0000 UTC m=+1.218481150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.299624 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18983180e051e77a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.959329146 +0000 UTC m=+1.220469556,LastTimestamp:2026-02-27 19:34:42.959329146 +0000 UTC m=+1.220469556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.304071 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983180e0ad8468 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.965333096 +0000 UTC m=+1.226473516,LastTimestamp:2026-02-27 19:34:42.965333096 +0000 UTC m=+1.226473516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.308255 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18983180e112b5a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:42.971964837 +0000 UTC m=+1.233105287,LastTimestamp:2026-02-27 19:34:42.971964837 +0000 UTC m=+1.233105287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.313421 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898318102522a1e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.52977155 +0000 UTC m=+1.790911970,LastTimestamp:2026-02-27 19:34:43.52977155 +0000 UTC m=+1.790911970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.317079 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189831810283f647 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.533035079 +0000 UTC m=+1.794175499,LastTimestamp:2026-02-27 19:34:43.533035079 +0000 UTC m=+1.794175499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.318504 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831810286ab95 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.533212565 +0000 UTC m=+1.794352995,LastTimestamp:2026-02-27 19:34:43.533212565 +0000 UTC m=+1.794352995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.321106 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898318102930414 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.534021652 +0000 UTC m=+1.795162072,LastTimestamp:2026-02-27 19:34:43.534021652 +0000 UTC m=+1.795162072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.324779 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318102b54b36 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.536268086 +0000 UTC m=+1.797408516,LastTimestamp:2026-02-27 19:34:43.536268086 +0000 UTC m=+1.797408516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.328309 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18983181031ed075 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.543183477 +0000 UTC m=+1.804323897,LastTimestamp:2026-02-27 19:34:43.543183477 +0000 UTC m=+1.804323897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.332366 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18983181032bd075 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.544035445 +0000 UTC m=+1.805175865,LastTimestamp:2026-02-27 19:34:43.544035445 +0000 UTC m=+1.805175865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.336997 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181032e1268 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.5441834 +0000 UTC m=+1.805323820,LastTimestamp:2026-02-27 19:34:43.5441834 +0000 UTC m=+1.805323820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.340769 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181034c1853 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.546150995 +0000 UTC m=+1.807291415,LastTimestamp:2026-02-27 19:34:43.546150995 +0000 UTC m=+1.807291415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.345159 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831810378b70d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.549075213 +0000 UTC m=+1.810215633,LastTimestamp:2026-02-27 19:34:43.549075213 +0000 UTC m=+1.810215633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.348654 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898318103b4ea80 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.553020544 +0000 UTC m=+1.814160964,LastTimestamp:2026-02-27 19:34:43.553020544 +0000 UTC m=+1.814160964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.352020 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318114e80ae7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.841583847 +0000 UTC m=+2.102724277,LastTimestamp:2026-02-27 19:34:43.841583847 +0000 UTC m=+2.102724277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.355209 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318115aa6627 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
19:34:43.854321191 +0000 UTC m=+2.115461621,LastTimestamp:2026-02-27 19:34:43.854321191 +0000 UTC m=+2.115461621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.358684 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318115ca5ea5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.856416421 +0000 UTC m=+2.117556861,LastTimestamp:2026-02-27 19:34:43.856416421 +0000 UTC m=+2.117556861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.363451 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181224cbf91 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.066287505 +0000 UTC m=+2.327427925,LastTimestamp:2026-02-27 19:34:44.066287505 +0000 UTC m=+2.327427925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.367317 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318122f680ad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.077412525 +0000 UTC m=+2.338552945,LastTimestamp:2026-02-27 19:34:44.077412525 +0000 UTC m=+2.338552945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.370506 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181230b28b4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.07876626 +0000 UTC m=+2.339906680,LastTimestamp:2026-02-27 19:34:44.07876626 +0000 UTC m=+2.339906680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.374933 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189831812d4e6f03 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.250947331 +0000 UTC m=+2.512087761,LastTimestamp:2026-02-27 19:34:44.250947331 +0000 UTC 
m=+2.512087761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.381923 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189831812e58ff90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.268416912 +0000 UTC m=+2.529557332,LastTimestamp:2026-02-27 19:34:44.268416912 +0000 UTC m=+2.529557332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.385579 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831813b6ae2d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.487693012 +0000 UTC m=+2.748833432,LastTimestamp:2026-02-27 19:34:44.487693012 +0000 UTC m=+2.748833432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.389718 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189831813b92815e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.490289502 +0000 UTC m=+2.751429922,LastTimestamp:2026-02-27 19:34:44.490289502 +0000 UTC m=+2.751429922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.393964 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189831813c34eed0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.500934352 +0000 UTC m=+2.762074762,LastTimestamp:2026-02-27 19:34:44.500934352 +0000 UTC m=+2.762074762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.401100 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831813ca799b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.508449203 +0000 UTC m=+2.769589623,LastTimestamp:2026-02-27 19:34:44.508449203 +0000 UTC m=+2.769589623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.405648 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898318148560ce7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.704431335 +0000 UTC m=+2.965571755,LastTimestamp:2026-02-27 19:34:44.704431335 +0000 UTC m=+2.965571755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.409281 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.409362 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318148a83cb2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.709817522 +0000 UTC m=+2.970957942,LastTimestamp:2026-02-27 19:34:44.709817522 +0000 UTC m=+2.970957942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.410913 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898318148b136dd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.710405853 +0000 UTC m=+2.971546273,LastTimestamp:2026-02-27 19:34:44.710405853 +0000 UTC m=+2.971546273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.412946 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831814907fcb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.7160926 +0000 UTC m=+2.977233020,LastTimestamp:2026-02-27 19:34:44.7160926 +0000 UTC m=+2.977233020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.415742 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189831814958cf0c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.721389324 +0000 UTC m=+2.982529744,LastTimestamp:2026-02-27 19:34:44.721389324 +0000 UTC m=+2.982529744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.418922 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18983181496846ea openshift-kube-scheduler 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.72240305 +0000 UTC m=+2.983543470,LastTimestamp:2026-02-27 19:34:44.72240305 +0000 UTC m=+2.983543470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.421784 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318149b1dd2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.727225647 +0000 UTC m=+2.988366067,LastTimestamp:2026-02-27 19:34:44.727225647 +0000 UTC m=+2.988366067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.429611 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1898318149c88897 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.728711319 +0000 UTC m=+2.989851739,LastTimestamp:2026-02-27 19:34:44.728711319 +0000 UTC m=+2.989851739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.433323 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318149e7a4bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.73075014 +0000 UTC m=+2.991890560,LastTimestamp:2026-02-27 19:34:44.73075014 +0000 UTC m=+2.991890560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.437326 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831814a4ec34e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.737508174 +0000 UTC m=+2.998648594,LastTimestamp:2026-02-27 19:34:44.737508174 +0000 UTC m=+2.998648594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.440721 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831815490d34e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:44.909609806 +0000 UTC m=+3.170750226,LastTimestamp:2026-02-27 19:34:44.909609806 +0000 UTC 
m=+3.170750226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.443993 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898318170d8b329 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.384082217 +0000 UTC m=+3.645222627,LastTimestamp:2026-02-27 19:34:45.384082217 +0000 UTC m=+3.645222627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.447182 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831817119a64d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
19:34:45.388338765 +0000 UTC m=+3.649479185,LastTimestamp:2026-02-27 19:34:45.388338765 +0000 UTC m=+3.649479185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.450300 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18983181712db5f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.38965349 +0000 UTC m=+3.650793910,LastTimestamp:2026-02-27 19:34:45.38965349 +0000 UTC m=+3.650793910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.453649 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18983181724502ad openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.407957677 +0000 UTC m=+3.669098097,LastTimestamp:2026-02-27 19:34:45.407957677 +0000 UTC m=+3.669098097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.457226 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898318172583f24 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.40921834 +0000 UTC m=+3.670358790,LastTimestamp:2026-02-27 19:34:45.40921834 +0000 UTC m=+3.670358790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.461671 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831817892f57b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.513729403 +0000 UTC m=+3.774869823,LastTimestamp:2026-02-27 19:34:45.513729403 +0000 UTC m=+3.774869823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.466119 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.466097 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831817b5b2749 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.560403785 +0000 UTC m=+3.821544205,LastTimestamp:2026-02-27 19:34:45.560403785 
+0000 UTC m=+3.821544205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.467376 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.467402 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.467411 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.467856 4941 scope.go:117] "RemoveContainer" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.491846 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189831817b64faf5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.561047797 +0000 UTC m=+3.822188217,LastTimestamp:2026-02-27 19:34:45.561047797 +0000 UTC m=+3.822188217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 
19:35:17.495955 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831817c66c5ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.577942445 +0000 UTC m=+3.839082865,LastTimestamp:2026-02-27 19:34:45.577942445 +0000 UTC m=+3.839082865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.500449 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189831817c6b54dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.578241245 +0000 UTC m=+3.839381665,LastTimestamp:2026-02-27 19:34:45.578241245 +0000 UTC m=+3.839381665,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.504295 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831817c777617 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.579036183 +0000 UTC m=+3.840176603,LastTimestamp:2026-02-27 19:34:45.579036183 +0000 UTC m=+3.840176603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.509167 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898318182c1cb32 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
19:34:45.68457093 +0000 UTC m=+3.945711350,LastTimestamp:2026-02-27 19:34:45.68457093 +0000 UTC m=+3.945711350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.514366 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189831818391ac8d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.698194573 +0000 UTC m=+3.959334993,LastTimestamp:2026-02-27 19:34:45.698194573 +0000 UTC m=+3.959334993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.518187 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18983181867ff4f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.747365112 +0000 UTC m=+4.008505532,LastTimestamp:2026-02-27 19:34:45.747365112 +0000 UTC m=+4.008505532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.521831 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318187128876 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.756971126 +0000 UTC m=+4.018111556,LastTimestamp:2026-02-27 19:34:45.756971126 +0000 UTC m=+4.018111556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.526043 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318187294112 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.758460178 +0000 UTC m=+4.019600598,LastTimestamp:2026-02-27 19:34:45.758460178 +0000 UTC m=+4.019600598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.529941 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831819199a143 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.933596995 +0000 UTC m=+4.194737415,LastTimestamp:2026-02-27 19:34:45.933596995 +0000 UTC m=+4.194737415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.533614 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318192530b3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.945748287 +0000 UTC m=+4.206888707,LastTimestamp:2026-02-27 19:34:45.945748287 +0000 UTC m=+4.206888707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.537576 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181b4e7e670 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.525929072 +0000 UTC m=+4.787069512,LastTimestamp:2026-02-27 19:34:46.525929072 +0000 UTC m=+4.787069512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 
19:35:17.542512 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1898318187294112\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318187294112 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.758460178 +0000 UTC m=+4.019600598,LastTimestamp:2026-02-27 19:34:46.53222306 +0000 UTC m=+4.793363480,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.546450 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181bf2501ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.697705962 +0000 UTC m=+4.958846382,LastTimestamp:2026-02-27 19:34:46.697705962 +0000 UTC 
m=+4.958846382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.550407 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189831819199a143\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831819199a143 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.933596995 +0000 UTC m=+4.194737415,LastTimestamp:2026-02-27 19:34:46.698337774 +0000 UTC m=+4.959478194,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.554281 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1898318192530b3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898318192530b3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:45.945748287 +0000 UTC m=+4.206888707,LastTimestamp:2026-02-27 19:34:46.709548474 +0000 UTC m=+4.970688894,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.558433 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181c00178a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.712154274 +0000 UTC m=+4.973294714,LastTimestamp:2026-02-27 19:34:46.712154274 +0000 UTC m=+4.973294714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.562778 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181c00ee961 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.713035105 +0000 UTC m=+4.974175525,LastTimestamp:2026-02-27 19:34:46.713035105 +0000 UTC m=+4.974175525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.566966 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181cb01aff1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.896717809 +0000 UTC m=+5.157858229,LastTimestamp:2026-02-27 19:34:46.896717809 +0000 UTC m=+5.157858229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.570676 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181cbf1a575 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.912443765 +0000 UTC m=+5.173584185,LastTimestamp:2026-02-27 19:34:46.912443765 +0000 UTC m=+5.173584185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.575548 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181cc0033c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:46.913397699 +0000 UTC m=+5.174538119,LastTimestamp:2026-02-27 19:34:46.913397699 +0000 UTC m=+5.174538119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.582606 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181d5cb54ff openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.077704959 +0000 UTC m=+5.338845379,LastTimestamp:2026-02-27 19:34:47.077704959 +0000 UTC m=+5.338845379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.582789 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.586248 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181d64971ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.085969836 +0000 UTC m=+5.347110266,LastTimestamp:2026-02-27 19:34:47.085969836 +0000 UTC m=+5.347110266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.589692 4941 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181d65a1e24 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.087062564 +0000 UTC m=+5.348202984,LastTimestamp:2026-02-27 19:34:47.087062564 +0000 UTC m=+5.348202984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.592530 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.593683 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.593722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.593735 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.593760 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.594181 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181e4ac07c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.327311815 +0000 UTC m=+5.588452245,LastTimestamp:2026-02-27 19:34:47.327311815 +0000 UTC m=+5.588452245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.597752 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181e5854dc9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.341551049 +0000 UTC m=+5.602691469,LastTimestamp:2026-02-27 19:34:47.341551049 +0000 UTC m=+5.602691469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.598178 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" 
at the cluster scope" node="crc" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.599605 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181e59dcb6b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.343156075 +0000 UTC m=+5.604296495,LastTimestamp:2026-02-27 19:34:47.343156075 +0000 UTC m=+5.604296495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.603368 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181f26e317f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.558140287 +0000 UTC m=+5.819280727,LastTimestamp:2026-02-27 19:34:47.558140287 +0000 UTC m=+5.819280727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.607039 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18983181f3594111 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.573545233 +0000 UTC m=+5.834685663,LastTimestamp:2026-02-27 19:34:47.573545233 +0000 UTC m=+5.834685663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.610304 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: &Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8c7f37 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 19:35:17 crc 
kubenswrapper[4941]: body: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694343991 +0000 UTC m=+5.955484451,LastTimestamp:2026-02-27 19:34:47.694343991 +0000 UTC m=+5.955484451,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.613817 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8daa9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694420634 +0000 UTC m=+5.955561064,LastTimestamp:2026-02-27 19:34:47.694420634 +0000 UTC m=+5.955561064,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.620303 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: 
&Event{ObjectMeta:{kube-apiserver-crc.189831842f61beb4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 19:35:17 crc kubenswrapper[4941]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 19:35:17 crc kubenswrapper[4941]: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:57.170669236 +0000 UTC m=+15.431809666,LastTimestamp:2026-02-27 19:34:57.170669236 +0000 UTC m=+15.431809666,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.624738 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189831842f624b2e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:57.170705198 +0000 UTC m=+15.431845628,LastTimestamp:2026-02-27 
19:34:57.170705198 +0000 UTC m=+15.431845628,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.628535 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189831842f61beb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: &Event{ObjectMeta:{kube-apiserver-crc.189831842f61beb4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 19:35:17 crc kubenswrapper[4941]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 19:35:17 crc kubenswrapper[4941]: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:57.170669236 +0000 UTC m=+15.431809666,LastTimestamp:2026-02-27 19:34:57.182361403 +0000 UTC m=+15.443501833,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.632106 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189831842f624b2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189831842f624b2e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:57.170705198 +0000 UTC m=+15.431845628,LastTimestamp:2026-02-27 19:34:57.182398704 +0000 UTC m=+15.443539134,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.636029 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8c7f37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: &Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8c7f37 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 19:35:17 crc kubenswrapper[4941]: body: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694343991 +0000 UTC m=+5.955484451,LastTimestamp:2026-02-27 19:34:57.694518133 +0000 
UTC m=+15.955658583,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.639604 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8daa9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8daa9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694420634 +0000 UTC m=+5.955561064,LastTimestamp:2026-02-27 19:34:57.694574055 +0000 UTC m=+15.955714505,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.644353 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8c7f37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: &Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8c7f37 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 19:35:17 crc kubenswrapper[4941]: body: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694343991 +0000 UTC m=+5.955484451,LastTimestamp:2026-02-27 19:35:07.693888048 +0000 UTC m=+25.955028468,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.648802 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8daa9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8daa9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694420634 +0000 UTC m=+5.955561064,LastTimestamp:2026-02-27 19:35:07.693939479 
+0000 UTC m=+25.955079899,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.652755 4941 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983186a2b9f33c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:35:07.695764284 +0000 UTC m=+25.956904704,LastTimestamp:2026-02-27 19:35:07.695764284 +0000 UTC m=+25.956904704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.656284 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181034c1853\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181034c1853 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.546150995 +0000 UTC m=+1.807291415,LastTimestamp:2026-02-27 19:35:07.808743844 +0000 UTC m=+26.069884264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.659983 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898318114e80ae7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318114e80ae7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.841583847 +0000 UTC m=+2.102724277,LastTimestamp:2026-02-27 19:35:07.975892764 +0000 UTC m=+26.237033184,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.663803 4941 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.1898318115aa6627\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898318115aa6627 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:43.854321191 +0000 UTC m=+2.115461621,LastTimestamp:2026-02-27 19:35:07.985195314 +0000 UTC m=+26.246335734,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.694515 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 19:35:17 crc kubenswrapper[4941]: I0227 19:35:17.694598 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.699644 4941 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8c7f37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 19:35:17 crc kubenswrapper[4941]: &Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8c7f37 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 19:35:17 crc kubenswrapper[4941]: body: Feb 27 19:35:17 crc kubenswrapper[4941]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694343991 +0000 UTC m=+5.955484451,LastTimestamp:2026-02-27 19:35:17.694581377 +0000 UTC m=+35.955721797,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 19:35:17 crc kubenswrapper[4941]: > Feb 27 19:35:17 crc kubenswrapper[4941]: E0227 19:35:17.704055 4941 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18983181fa8daa9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18983181fa8daa9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:34:47.694420634 +0000 UTC m=+5.955561064,LastTimestamp:2026-02-27 19:35:17.694623659 +0000 UTC m=+35.955764079,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.408544 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.642595 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.643064 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.644596 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5" exitCode=255
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.644634 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"}
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.644670 4941 scope.go:117] "RemoveContainer" containerID="e84d0cdc2c1a5c87bbee30a6de324fe9ba9b2db5ac812697943bab41e75e6ffb"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.644839 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.645711 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.645733 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.645742 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.646181 4941 scope.go:117] "RemoveContainer" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"
Feb 27 19:35:18 crc kubenswrapper[4941]: E0227 19:35:18.646322 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:18 crc kubenswrapper[4941]: I0227 19:35:18.686524 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.410936 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.649004 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.651028 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.651806 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.651843 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.651861 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:19 crc kubenswrapper[4941]: I0227 19:35:19.652452 4941 scope.go:117] "RemoveContainer" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"
Feb 27 19:35:19 crc kubenswrapper[4941]: E0227 19:35:19.652703 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:20 crc kubenswrapper[4941]: W0227 19:35:20.321771 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 27 19:35:20 crc kubenswrapper[4941]: E0227 19:35:20.321846 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 19:35:20 crc kubenswrapper[4941]: I0227 19:35:20.406825 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:21 crc kubenswrapper[4941]: I0227 19:35:21.408978 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:21 crc kubenswrapper[4941]: I0227 19:35:21.925675 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 19:35:21 crc kubenswrapper[4941]: I0227 19:35:21.937171 4941 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 27 19:35:22 crc kubenswrapper[4941]: W0227 19:35:22.078825 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 27 19:35:22 crc kubenswrapper[4941]: E0227 19:35:22.078881 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 19:35:22 crc kubenswrapper[4941]: I0227 19:35:22.410161 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:22 crc kubenswrapper[4941]: E0227 19:35:22.552852 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 19:35:23 crc kubenswrapper[4941]: I0227 19:35:23.411524 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.408811 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:24 crc kubenswrapper[4941]: E0227 19:35:24.588647 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.598508 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.599956 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.600028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.600054 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.600101 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 19:35:24 crc kubenswrapper[4941]: E0227 19:35:24.605076 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.751338 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.751595 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.752822 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.752863 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.752873 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:24 crc kubenswrapper[4941]: I0227 19:35:24.756122 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.050716 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.050901 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.051925 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.051972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.051989 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.052807 4941 scope.go:117] "RemoveContainer" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"
Feb 27 19:35:25 crc kubenswrapper[4941]: E0227 19:35:25.053065 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.406803 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.665364 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.666291 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.666328 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:25 crc kubenswrapper[4941]: I0227 19:35:25.666337 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:26 crc kubenswrapper[4941]: W0227 19:35:26.339211 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:26 crc kubenswrapper[4941]: E0227 19:35:26.339264 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 19:35:26 crc kubenswrapper[4941]: I0227 19:35:26.408558 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:27 crc kubenswrapper[4941]: I0227 19:35:27.407191 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:28 crc kubenswrapper[4941]: I0227 19:35:28.408537 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:29 crc kubenswrapper[4941]: I0227 19:35:29.410084 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:30 crc kubenswrapper[4941]: I0227 19:35:30.407058 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.408538 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:31 crc kubenswrapper[4941]: E0227 19:35:31.593350 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.605539 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.606691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.606728 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.606737 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:31 crc kubenswrapper[4941]: I0227 19:35:31.606761 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 19:35:31 crc kubenswrapper[4941]: E0227 19:35:31.610562 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 19:35:32 crc kubenswrapper[4941]: W0227 19:35:32.091715 4941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 27 19:35:32 crc kubenswrapper[4941]: E0227 19:35:32.091778 4941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 19:35:32 crc kubenswrapper[4941]: I0227 19:35:32.408578 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:32 crc kubenswrapper[4941]: E0227 19:35:32.553687 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.185102 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.185241 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.186338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.186381 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.186394 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:33 crc kubenswrapper[4941]: I0227 19:35:33.407370 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:34 crc kubenswrapper[4941]: I0227 19:35:34.408318 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:35 crc kubenswrapper[4941]: I0227 19:35:35.410811 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:36 crc kubenswrapper[4941]: I0227 19:35:36.408666 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:37 crc kubenswrapper[4941]: I0227 19:35:37.410765 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.411102 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:38 crc kubenswrapper[4941]: E0227 19:35:38.601771 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.611135 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.612857 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.612907 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.612924 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:38 crc kubenswrapper[4941]: I0227 19:35:38.612956 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 19:35:38 crc kubenswrapper[4941]: E0227 19:35:38.619841 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.407338 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.466820 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.468574 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.468699 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.468789 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.469439 4941 scope.go:117] "RemoveContainer" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.698886 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.700231 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"}
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.700336 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.701197 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.701222 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:39 crc kubenswrapper[4941]: I0227 19:35:39.701235 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.407260 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.703912 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.704668 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.706608 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e" exitCode=255
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.706679 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"}
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.706791 4941 scope.go:117] "RemoveContainer" containerID="dba6c740af9f8c644d35dd30af2e078e6d5d1f45ce4bebe050e4ec69dcd2c4f5"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.706954 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.708137 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.708162 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.708170 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:40 crc kubenswrapper[4941]: I0227 19:35:40.708625 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"
Feb 27 19:35:40 crc kubenswrapper[4941]: E0227 19:35:40.708808 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:41 crc kubenswrapper[4941]: I0227 19:35:41.407213 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:41 crc kubenswrapper[4941]: I0227 19:35:41.709785 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 19:35:42 crc kubenswrapper[4941]: I0227 19:35:42.407841 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:42 crc kubenswrapper[4941]: E0227 19:35:42.554824 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 19:35:43 crc kubenswrapper[4941]: I0227 19:35:43.407876 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:44 crc kubenswrapper[4941]: I0227 19:35:44.410998 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.617372 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.617568 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.619626 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.619680 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.619702 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.620615 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.620745 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:45 crc kubenswrapper[4941]: E0227 19:35:45.620901 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:45 crc kubenswrapper[4941]: E0227 19:35:45.623134 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.623707 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.623752 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.623765 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.623789 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 19:35:45 crc kubenswrapper[4941]: I0227 19:35:45.624379 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:45 crc kubenswrapper[4941]: E0227 19:35:45.627731 4941 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 19:35:46 crc kubenswrapper[4941]: I0227 19:35:46.408183 4941 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 19:35:46 crc kubenswrapper[4941]: I0227 19:35:46.818453 4941 csr.go:261] certificate signing request csr-c697d is approved, waiting to be issued
Feb 27 19:35:46 crc kubenswrapper[4941]: I0227 19:35:46.827862 4941 csr.go:257] certificate signing request csr-c697d is issued
Feb 27 19:35:46 crc kubenswrapper[4941]: I0227 19:35:46.911236 4941 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 27 19:35:47 crc kubenswrapper[4941]: I0227 19:35:47.266059 4941 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 27 19:35:47 crc kubenswrapper[4941]: I0227 19:35:47.829218 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-16 13:06:38.708341862 +0000 UTC
Feb 27 19:35:47 crc kubenswrapper[4941]: I0227 19:35:47.829261 4941 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6281h30m50.879083122s for next certificate rotation
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.685946 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.686160 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.687540 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.687565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.687578 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:48 crc kubenswrapper[4941]: I0227 19:35:48.688161 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"
Feb 27 19:35:48 crc kubenswrapper[4941]: E0227 19:35:48.688333 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.556286 4941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.628054 4941 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.628990 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.629028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.629036 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.629130 4941 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.637407 4941 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.637717 4941 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.637745 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.640882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.640918 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.640929 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.640946 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.640958 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:52Z","lastTransitionTime":"2026-02-27T19:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.654795 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.662046 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.662086 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.662097 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.662113 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.662123 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:52Z","lastTransitionTime":"2026-02-27T19:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.672776 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.678917 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.678944 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.678953 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.678966 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.678977 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:52Z","lastTransitionTime":"2026-02-27T19:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.687079 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.692816 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.692841 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.692849 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.692861 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:52 crc kubenswrapper[4941]: I0227 19:35:52.692870 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:52Z","lastTransitionTime":"2026-02-27T19:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.701194 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.701299 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.701318 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.802161 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:52 crc kubenswrapper[4941]: E0227 19:35:52.902490 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.002804 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.103789 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.204681 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.305060 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.405622 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.506403 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.607240 4941 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.707701 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.808159 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:53 crc kubenswrapper[4941]: E0227 19:35:53.908447 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.009299 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.110204 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.211374 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.311485 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.412275 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.513409 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.614391 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.714811 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc 
kubenswrapper[4941]: E0227 19:35:54.815708 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:54 crc kubenswrapper[4941]: E0227 19:35:54.915895 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.016980 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.117364 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.218086 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.318814 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.419838 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.520523 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.620888 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.720998 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.822019 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:55 crc kubenswrapper[4941]: E0227 19:35:55.922140 4941 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.023232 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.124447 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.225621 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.326237 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.427247 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.528012 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.628402 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.729560 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.830386 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:56 crc kubenswrapper[4941]: E0227 19:35:56.930748 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.031440 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.132371 4941 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.232495 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.333180 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.433755 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.534369 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.634555 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.735595 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.836585 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:57 crc kubenswrapper[4941]: E0227 19:35:57.937151 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.037402 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.137566 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.238430 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 
19:35:58.339521 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: I0227 19:35:58.364988 4941 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.440207 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.540683 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: I0227 19:35:58.581089 4941 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.641913 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.742589 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.843024 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:58 crc kubenswrapper[4941]: E0227 19:35:58.944122 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.045171 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.146373 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.246657 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.347179 4941 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.404321 4941 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.431907 4941 apiserver.go:52] "Watching apiserver"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.437975 4941 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.438767 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-multus/multus-additional-cni-plugins-wbhlr","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-machine-config-operator/machine-config-daemon-hj7qr","openshift-multus/multus-lt4bk","openshift-multus/network-metrics-daemon-mvmp7","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-v74b7","openshift-dns/node-resolver-8pmzp","openshift-image-registry/node-ca-xr6t6","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx"]
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439201 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439384 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.439487 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439498 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.439544 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439677 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439706 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439818 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.439894 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.439930 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.439991 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.440393 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lt4bk"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.440918 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.441023 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8pmzp"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.441021 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.441114 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wbhlr"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.441175 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.441220 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xr6t6"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.443868 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.443975 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444031 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444070 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444102 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444154 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444180 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444499 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444818 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.445454 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.445702 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.444187 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.446589 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.446733 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.447160 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.448210 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.449394 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.453815 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.453979 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.453999 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454099 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454116 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454143 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454226 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454284 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454300 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454380 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454526 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454581 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454658 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454711 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454796 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.454816 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455019 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455065 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455075 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455158 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455689 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455714 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455723 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455738 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.455747 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.467881 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.478175 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.488258 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.496937 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.504625 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.508745 4941 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.512839 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.521400 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.530753 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.538256 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.546331 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.566826 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.566855 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.566865 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.566878 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.566887 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.574732 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585189 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585456 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585562 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585629 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585699 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: 
I0227 19:35:59.585981 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586068 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.585624 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586172 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586234 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586140 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586312 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586333 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586352 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586369 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586384 4941 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586380 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586403 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586462 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586511 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586524 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" 
(OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586536 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586581 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586609 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586637 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586662 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586665 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586685 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586707 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586730 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586751 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586759 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586795 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586848 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586870 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586900 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586922 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586920 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586944 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586963 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.586985 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587005 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587025 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587043 4941 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587061 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587079 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587097 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587115 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587132 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " 
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587151 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587171 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587192 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587217 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587237 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587257 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587279 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587300 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587319 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587339 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587357 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587377 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587397 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587420 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587440 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587459 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587488 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587498 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587500 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587521 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587541 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587564 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587586 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587605 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587626 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587647 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587680 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587692 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587715 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587737 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587758 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587777 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587797 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587820 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587841 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587863 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587883 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587903 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587926 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587949 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587968 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587987 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588006 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588044 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588068 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588100 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588124 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588145 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588166 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 
19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588186 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588205 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588247 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588271 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588294 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588318 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588337 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588359 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588379 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588400 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588423 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 
19:35:59.588445 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588464 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588500 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588549 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588576 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588601 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588625 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588647 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588671 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588694 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588720 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc 
kubenswrapper[4941]: I0227 19:35:59.588743 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588766 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588788 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588810 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588833 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588855 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588876 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588895 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588916 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588938 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588962 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 
19:35:59.588986 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589010 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589030 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589106 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589126 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589142 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589158 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589175 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589191 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589209 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589224 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 
19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589242 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589259 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589275 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589292 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589308 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589324 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589340 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589356 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589373 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589397 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589421 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 
19:35:59.589445 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589584 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589604 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589620 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589636 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589652 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589678 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589710 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589735 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589758 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589779 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 19:35:59 
crc kubenswrapper[4941]: I0227 19:35:59.589800 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589823 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589866 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589893 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589918 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589941 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589960 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589976 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589993 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590012 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590036 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590055 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590072 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590090 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590108 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590125 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590141 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590158 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590176 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590195 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590211 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590237 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: 
I0227 19:35:59.590264 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590287 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590312 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590337 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590357 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590375 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590401 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590427 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590452 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590496 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590521 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590560 4941 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590585 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590614 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590641 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590666 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590689 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590714 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590737 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590761 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590784 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590861 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590879 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-cnibin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590894 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-conf-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590912 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590928 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/364afc31-5c38-471a-b645-c6d4388a3dc5-serviceca\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590943 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590961 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-os-release\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590979 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c0b99f5-8424-4e74-a332-f6dff828c48a-mcd-auth-proxy-config\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590998 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd71801-391f-4811-992b-e7ca21d72fbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591015 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c0b99f5-8424-4e74-a332-f6dff828c48a-proxy-tls\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591039 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591057 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pb9\" (UniqueName: \"kubernetes.io/projected/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-kube-api-access-s4pb9\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591074 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wcb\" (UniqueName: \"kubernetes.io/projected/364afc31-5c38-471a-b645-c6d4388a3dc5-kube-api-access-r8wcb\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591089 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-multus-certs\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591108 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/47bb43f7-4b26-4fea-9b06-b485aaff253f-hosts-file\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591124 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591139 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-multus-daemon-config\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591154 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591175 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591195 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591218 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-system-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591242 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591271 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591298 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591323 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: 
I0227 19:35:59.591347 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591373 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591399 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2jq6\" (UniqueName: \"kubernetes.io/projected/52622e46-d1a9-4b02-8ed1-8130f184b10c-kube-api-access-z2jq6\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591422 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xkwj\" (UniqueName: \"kubernetes.io/projected/1c0b99f5-8424-4e74-a332-f6dff828c48a-kube-api-access-5xkwj\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591443 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-socket-dir-parent\") pod \"multus-lt4bk\" (UID: 
\"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591465 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591496 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591513 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591528 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591543 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591563 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591581 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591611 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591640 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591662 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-cni-binary-copy\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591686 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-kubelet\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591705 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591719 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591735 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/364afc31-5c38-471a-b645-c6d4388a3dc5-host\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591751 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591767 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591786 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591802 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-netns\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591818 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591833 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-cnibin\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591851 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591867 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591883 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxgc\" (UniqueName: \"kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591899 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvl2\" (UniqueName: \"kubernetes.io/projected/47bb43f7-4b26-4fea-9b06-b485aaff253f-kube-api-access-brvl2\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591919 4941 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-system-cni-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591942 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-bin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591967 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcm5q\" (UniqueName: \"kubernetes.io/projected/16d71936-7f0d-4add-a17b-400840d5fce2-kube-api-access-qcm5q\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591989 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592009 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " 
pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592031 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-k8s-cni-cncf-io\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592057 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-hostroot\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592072 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592089 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592105 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592120 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592140 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592155 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-os-release\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592171 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c0b99f5-8424-4e74-a332-f6dff828c48a-rootfs\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592194 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592210 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-etc-kubernetes\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592234 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592258 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9db\" (UniqueName: \"kubernetes.io/projected/8fd71801-391f-4811-992b-e7ca21d72fbd-kube-api-access-tk9db\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592286 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-multus\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592356 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592372 4941 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592388 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592405 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592420 4941 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592437 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592451 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592465 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath 
\"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592496 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592509 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592523 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592537 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.593724 4941 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603560 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604949 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587793 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606850 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.587917 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588158 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588251 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588333 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588652 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.588935 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589115 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589210 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589338 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589353 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589609 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.589989 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590277 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590360 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.590965 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591076 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591190 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591208 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591255 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591502 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591684 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591797 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.591886 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592007 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592061 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592237 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592266 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592354 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592429 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592659 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.592796 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.593005 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.593032 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.593041 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594082 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594165 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594493 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594657 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594578 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594864 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.594872 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595044 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.608977 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607217 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607788 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607988 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.610997 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595213 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: 
"kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595563 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595572 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595608 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595761 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595967 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.595997 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.596208 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.596232 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.596560 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.596612 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.596858 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.597027 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.597154 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.597183 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.597367 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.597670 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.598291 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.598674 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.599171 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.599710 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.599999 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600211 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.600426 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.100403047 +0000 UTC m=+78.361543467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600486 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600542 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600542 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600610 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600696 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600775 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600799 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600825 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.600910 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600926 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.600991 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601033 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601081 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601480 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601680 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601728 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601745 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.601763 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602031 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602044 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602071 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602225 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602290 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602801 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602850 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602874 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.602935 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603083 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603175 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603228 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603232 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603271 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603323 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603546 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603607 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603556 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603840 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.603941 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604085 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604236 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604378 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604503 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604045 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.604897 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.605699 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.605763 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.605979 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606141 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606235 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606460 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606555 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606669 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606716 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606810 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.606826 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607036 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607210 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607219 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607364 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607494 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.607943 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.607949 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.610203 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.617779 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.117755342 +0000 UTC m=+78.378895762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.618338 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.117844575 +0000 UTC m=+78.378984995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.618828 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.618854 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.618870 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.618902 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.619357 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.619385 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.619954 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.619977 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.619990 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.619983 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.620036 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.12002221 +0000 UTC m=+78.381162720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620105 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620114 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620136 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620212 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620339 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620393 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620566 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620619 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620696 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.620987 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621130 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621198 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621250 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621361 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621534 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621578 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621634 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622002 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622093 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621651 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621669 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621673 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621779 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.621818 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.622205 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.622288 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.622300 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.622335 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.122324048 +0000 UTC m=+78.383464558 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622648 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622702 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622867 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.622873 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.623085 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.623449 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.623598 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.623764 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.623985 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.624120 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.624225 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.625788 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.626681 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.628876 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.629097 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.630293 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.631361 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.631994 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.632431 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.633981 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.634084 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.634352 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.635206 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.637736 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.639275 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.647263 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.652378 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.653387 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.655189 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.663701 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.669701 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.669742 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.669755 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.669771 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.669783 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.671033 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.683811 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693104 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693149 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9db\" (UniqueName: \"kubernetes.io/projected/8fd71801-391f-4811-992b-e7ca21d72fbd-kube-api-access-tk9db\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693183 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-etc-kubernetes\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693203 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-multus\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693226 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc 
kubenswrapper[4941]: I0227 19:35:59.693252 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-etc-kubernetes\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693223 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693284 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-cnibin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693308 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-conf-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693461 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693502 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-os-release\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693383 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-conf-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693523 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c0b99f5-8424-4e74-a332-f6dff828c48a-mcd-auth-proxy-config\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693326 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-multus\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693542 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/364afc31-5c38-471a-b645-c6d4388a3dc5-serviceca\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693586 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693588 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693617 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd71801-391f-4811-992b-e7ca21d72fbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693644 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/47bb43f7-4b26-4fea-9b06-b485aaff253f-hosts-file\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693668 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693671 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-os-release\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693688 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c0b99f5-8424-4e74-a332-f6dff828c48a-proxy-tls\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693720 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pb9\" (UniqueName: \"kubernetes.io/projected/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-kube-api-access-s4pb9\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693740 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wcb\" (UniqueName: \"kubernetes.io/projected/364afc31-5c38-471a-b645-c6d4388a3dc5-kube-api-access-r8wcb\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693760 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-multus-certs\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693781 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-multus-daemon-config\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693802 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693822 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-system-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693842 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693863 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693892 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket\") pod 
\"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693914 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693936 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2jq6\" (UniqueName: \"kubernetes.io/projected/52622e46-d1a9-4b02-8ed1-8130f184b10c-kube-api-access-z2jq6\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693960 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xkwj\" (UniqueName: \"kubernetes.io/projected/1c0b99f5-8424-4e74-a332-f6dff828c48a-kube-api-access-5xkwj\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693981 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-socket-dir-parent\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694033 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694053 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694073 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694094 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694114 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-kubelet\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694134 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694153 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694180 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694184 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-cni-binary-copy\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694222 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694240 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch\") pod \"ovnkube-node-v74b7\" 
(UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694256 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694272 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/364afc31-5c38-471a-b645-c6d4388a3dc5-host\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694278 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694298 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694298 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc 
kubenswrapper[4941]: I0227 19:35:59.694329 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-cnibin\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.693393 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-cnibin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694360 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-netns\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694381 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-bin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694425 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcm5q\" (UniqueName: \"kubernetes.io/projected/16d71936-7f0d-4add-a17b-400840d5fce2-kube-api-access-qcm5q\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694444 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694735 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/364afc31-5c38-471a-b645-c6d4388a3dc5-serviceca\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694740 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694591 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694774 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxgc\" (UniqueName: \"kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694798 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvl2\" (UniqueName: 
\"kubernetes.io/projected/47bb43f7-4b26-4fea-9b06-b485aaff253f-kube-api-access-brvl2\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694821 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-system-cni-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694844 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694864 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694871 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694868 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-cni-binary-copy\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694888 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-os-release\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694908 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-k8s-cni-cncf-io\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694926 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694934 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-hostroot\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694948 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/364afc31-5c38-471a-b645-c6d4388a3dc5-host\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 
19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694955 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694976 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694998 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695012 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-system-cni-dir\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695022 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 
19:35:59.695044 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c0b99f5-8424-4e74-a332-f6dff828c48a-rootfs\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695150 4941 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695166 4941 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695179 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695191 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695203 4941 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695213 4941 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 
27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695227 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695238 4941 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695250 4941 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695262 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695274 4941 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695285 4941 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695296 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695310 4941 reconciler_common.go:293] 
"Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695313 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/16d71936-7f0d-4add-a17b-400840d5fce2-multus-daemon-config\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695322 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695349 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c0b99f5-8424-4e74-a332-f6dff828c48a-rootfs\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695353 4941 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695383 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695396 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695407 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695420 4941 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695434 4941 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695438 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695445 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695447 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695414 4941 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694463 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/47bb43f7-4b26-4fea-9b06-b485aaff253f-hosts-file\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695492 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695541 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-multus-socket-dir-parent\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695550 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-os-release\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695571 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695579 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-k8s-cni-cncf-io\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694913 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695621 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-hostroot\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695647 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.695901 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd71801-391f-4811-992b-e7ca21d72fbd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: 
\"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694630 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-cnibin\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694672 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-cni-bin\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694652 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-netns\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694844 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.694127 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696274 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-binary-copy\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696811 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/52622e46-d1a9-4b02-8ed1-8130f184b10c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696847 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-system-cni-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.696902 4941 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696895 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52622e46-d1a9-4b02-8ed1-8130f184b10c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: E0227 19:35:59.696950 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:00.196934573 +0000 UTC m=+78.458074993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696954 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.696976 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: 
I0227 19:35:59.697003 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697030 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-var-lib-kubelet\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697066 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697091 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697120 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697232 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/16d71936-7f0d-4add-a17b-400840d5fce2-host-run-multus-certs\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697338 4941 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697356 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697371 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697384 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697397 4941 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697411 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697424 4941 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697435 4941 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697448 4941 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697525 4941 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697541 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697555 4941 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697579 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697594 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697605 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697617 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697630 4941 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697641 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697653 4941 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697665 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697672 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c0b99f5-8424-4e74-a332-f6dff828c48a-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697678 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697723 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697738 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697751 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697762 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697774 4941 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697786 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697798 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697810 4941 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697821 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697833 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697844 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697857 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697872 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697883 4941 
reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697895 4941 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697908 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697918 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697929 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697940 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697952 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697963 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697976 4941 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.697988 4941 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698035 4941 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698047 4941 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698058 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698070 4941 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698082 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698094 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698092 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd71801-391f-4811-992b-e7ca21d72fbd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698104 4941 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698141 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698153 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698166 4941 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698179 4941 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698191 4941 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698203 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698215 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698227 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698239 4941 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698251 4941 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698263 4941 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698275 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698288 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698301 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698313 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698326 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698338 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698350 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698362 4941 reconciler_common.go:293] 
"Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698374 4941 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698387 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698398 4941 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698412 4941 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698427 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698440 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698453 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698501 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698517 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698529 4941 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698541 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698553 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698567 4941 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698579 4941 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698592 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698608 4941 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698621 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.698634 4941 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702589 4941 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702605 4941 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702617 4941 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node 
\"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702629 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702641 4941 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702654 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702667 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702679 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702765 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702779 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702790 4941 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702802 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702812 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702824 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702836 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702849 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702860 4941 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702874 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702885 4941 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702895 4941 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702906 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702916 4941 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702927 4941 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702938 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702950 4941 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on 
node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702961 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702971 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702983 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.702995 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703008 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703020 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703032 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703045 4941 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703056 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703070 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703081 4941 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703092 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703103 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703116 4941 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703127 4941 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703139 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703151 4941 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703164 4941 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703175 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703187 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703199 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703211 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703223 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703234 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703246 4941 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703258 4941 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703269 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703281 4941 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703293 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703304 4941 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703317 4941 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703327 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703339 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703352 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703364 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703375 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 
19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703387 4941 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703398 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703408 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703419 4941 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703430 4941 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703442 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703453 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703465 4941 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703499 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703509 4941 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.703521 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.707233 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.707547 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c0b99f5-8424-4e74-a332-f6dff828c48a-proxy-tls\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.707823 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.711318 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvl2\" (UniqueName: \"kubernetes.io/projected/47bb43f7-4b26-4fea-9b06-b485aaff253f-kube-api-access-brvl2\") pod \"node-resolver-8pmzp\" (UID: \"47bb43f7-4b26-4fea-9b06-b485aaff253f\") " pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.711656 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcm5q\" (UniqueName: 
\"kubernetes.io/projected/16d71936-7f0d-4add-a17b-400840d5fce2-kube-api-access-qcm5q\") pod \"multus-lt4bk\" (UID: \"16d71936-7f0d-4add-a17b-400840d5fce2\") " pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.711791 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxgc\" (UniqueName: \"kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc\") pod \"ovnkube-node-v74b7\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.715355 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2jq6\" (UniqueName: \"kubernetes.io/projected/52622e46-d1a9-4b02-8ed1-8130f184b10c-kube-api-access-z2jq6\") pod \"multus-additional-cni-plugins-wbhlr\" (UID: \"52622e46-d1a9-4b02-8ed1-8130f184b10c\") " pod="openshift-multus/multus-additional-cni-plugins-wbhlr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.716829 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wcb\" (UniqueName: \"kubernetes.io/projected/364afc31-5c38-471a-b645-c6d4388a3dc5-kube-api-access-r8wcb\") pod \"node-ca-xr6t6\" (UID: \"364afc31-5c38-471a-b645-c6d4388a3dc5\") " pod="openshift-image-registry/node-ca-xr6t6" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.718481 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xkwj\" (UniqueName: \"kubernetes.io/projected/1c0b99f5-8424-4e74-a332-f6dff828c48a-kube-api-access-5xkwj\") pod \"machine-config-daemon-hj7qr\" (UID: \"1c0b99f5-8424-4e74-a332-f6dff828c48a\") " pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.719380 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9db\" (UniqueName: 
\"kubernetes.io/projected/8fd71801-391f-4811-992b-e7ca21d72fbd-kube-api-access-tk9db\") pod \"ovnkube-control-plane-749d76644c-9fhpx\" (UID: \"8fd71801-391f-4811-992b-e7ca21d72fbd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.720786 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pb9\" (UniqueName: \"kubernetes.io/projected/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-kube-api-access-s4pb9\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.771200 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.772687 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.772750 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.772773 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.772798 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.772816 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.785720 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.795197 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-cea3a3995cbd8b6e366e15d62365698f7e9c2abc41127edd0e5efffcc93cebe1 WatchSource:0}: Error finding container cea3a3995cbd8b6e366e15d62365698f7e9c2abc41127edd0e5efffcc93cebe1: Status 404 returned error can't find the container with id cea3a3995cbd8b6e366e15d62365698f7e9c2abc41127edd0e5efffcc93cebe1 Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.801085 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.817333 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lt4bk" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.829121 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.836120 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d71936_7f0d_4add_a17b_400840d5fce2.slice/crio-e0a289a38813b777f444710aeca585d332fdf28a8eadac1fdb9275f7b3174c66 WatchSource:0}: Error finding container e0a289a38813b777f444710aeca585d332fdf28a8eadac1fdb9275f7b3174c66: Status 404 returned error can't find the container with id e0a289a38813b777f444710aeca585d332fdf28a8eadac1fdb9275f7b3174c66 Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.839280 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8pmzp" Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.849991 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.852503 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd71801_391f_4811_992b_e7ca21d72fbd.slice/crio-67a6964fd897f828d083d41e582237ac34dba46bedc6338cf141105129620a94 WatchSource:0}: Error finding container 67a6964fd897f828d083d41e582237ac34dba46bedc6338cf141105129620a94: Status 404 returned error can't find the container with id 67a6964fd897f828d083d41e582237ac34dba46bedc6338cf141105129620a94 Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.861908 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47bb43f7_4b26_4fea_9b06_b485aaff253f.slice/crio-2462b7a3056317b7b288dbe1b2c7f479b93752bb53ffccc80e4cb2f7b670c54d WatchSource:0}: Error finding container 2462b7a3056317b7b288dbe1b2c7f479b93752bb53ffccc80e4cb2f7b670c54d: Status 404 returned error 
can't find the container with id 2462b7a3056317b7b288dbe1b2c7f479b93752bb53ffccc80e4cb2f7b670c54d
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.867558 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7"
Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.867944 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0b99f5_8424_4e74_a332_f6dff828c48a.slice/crio-3798b079be17c3819c3c2df29b7a384cb56b1a5ce55df19e18a46275c8033ffd WatchSource:0}: Error finding container 3798b079be17c3819c3c2df29b7a384cb56b1a5ce55df19e18a46275c8033ffd: Status 404 returned error can't find the container with id 3798b079be17c3819c3c2df29b7a384cb56b1a5ce55df19e18a46275c8033ffd
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874522 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xr6t6"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874921 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874951 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874964 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874979 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.874991 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.881729 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wbhlr"
Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.911826 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb476894_9c4f_487a_bfa6_5babb5243c0d.slice/crio-38de59a0fe679fe6441797a6ff231bc7c5d1b85589c3c666cf0c687f5d2e9dba WatchSource:0}: Error finding container 38de59a0fe679fe6441797a6ff231bc7c5d1b85589c3c666cf0c687f5d2e9dba: Status 404 returned error can't find the container with id 38de59a0fe679fe6441797a6ff231bc7c5d1b85589c3c666cf0c687f5d2e9dba
Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.915318 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364afc31_5c38_471a_b645_c6d4388a3dc5.slice/crio-2fdda1bd121db80f96e36a24f0c5e361183a578e881cf2eae5c043bf6c99ae02 WatchSource:0}: Error finding container 2fdda1bd121db80f96e36a24f0c5e361183a578e881cf2eae5c043bf6c99ae02: Status 404 returned error can't find the container with id 2fdda1bd121db80f96e36a24f0c5e361183a578e881cf2eae5c043bf6c99ae02
Feb 27 19:35:59 crc kubenswrapper[4941]: W0227 19:35:59.916766 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52622e46_d1a9_4b02_8ed1_8130f184b10c.slice/crio-f666e787c2320b4c50fc8a0c4cb14a194192ab79e1e1ed9354da33ab14d4a57e WatchSource:0}: Error finding container f666e787c2320b4c50fc8a0c4cb14a194192ab79e1e1ed9354da33ab14d4a57e: Status 404 returned error can't find the container with id f666e787c2320b4c50fc8a0c4cb14a194192ab79e1e1ed9354da33ab14d4a57e
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.982688 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.982726 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.982736 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.982752 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:35:59 crc kubenswrapper[4941]: I0227 19:35:59.982765 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:35:59Z","lastTransitionTime":"2026-02-27T19:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.084813 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.085156 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.085171 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.085187 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.085199 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.106531 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.106711 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.106689808 +0000 UTC m=+79.367830238 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.188556 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.188590 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.188598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.188612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.188622 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.207328 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.207404 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.207460 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207482 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.207508 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.207537 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207560 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.207541832 +0000 UTC m=+79.468682252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207687 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207767 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.207748848 +0000 UTC m=+79.468889258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207771 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207702 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207833 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207840 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207847 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.207830671 +0000 UTC m=+79.468971141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207855 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207849 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207924 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.207913163 +0000 UTC m=+79.469053673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.207870 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.208002 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:01.207982125 +0000 UTC m=+79.469122545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.290177 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.290219 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.290232 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.290251 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.290266 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.393030 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.393067 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.393076 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.393090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.393099 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.471492 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.472131 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.473088 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.473701 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.474326 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.474807 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.475456 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.476002 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.476670 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.477168 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.477693 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.478309 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.478803 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.479335 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.479868 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.480416 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.482947 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.483852 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.483861 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.484182 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.486319 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.487676 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.488626 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.492863 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.494055 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497164 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497671 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497698 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497711 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.497721 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.498103 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.500338 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.502094 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.503439 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.505405 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.506075 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.507077 4941 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.507222 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.508871 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.510091 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.510572 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.512301 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.512977 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.514145 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.514809 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.517606 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.518453 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.520440 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.521649 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.523435 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.524520 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.525664 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.526707 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.530709 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.532248 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.534005 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.534595 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.535391 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.536680 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.537461 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.538327 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.600361 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.600402 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.600414 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.600431 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.600442 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.703060 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.703112 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.703127 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.703148 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.703164 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.759599 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641" exitCode=0
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.759696 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.759773 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerStarted","Data":"f666e787c2320b4c50fc8a0c4cb14a194192ab79e1e1ed9354da33ab14d4a57e"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.761508 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.761535 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"65b3545b2783ccb54d8f55c74984874b74764c261c805f607d30d0e5fe4e423e"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.763626 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" event={"ID":"8fd71801-391f-4811-992b-e7ca21d72fbd","Type":"ContainerStarted","Data":"7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.763674 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" event={"ID":"8fd71801-391f-4811-992b-e7ca21d72fbd","Type":"ContainerStarted","Data":"c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.763684 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" event={"ID":"8fd71801-391f-4811-992b-e7ca21d72fbd","Type":"ContainerStarted","Data":"67a6964fd897f828d083d41e582237ac34dba46bedc6338cf141105129620a94"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.768379 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.768448 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.768513 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"3798b079be17c3819c3c2df29b7a384cb56b1a5ce55df19e18a46275c8033ffd"}
Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.769619 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-dns/node-resolver-8pmzp" event={"ID":"47bb43f7-4b26-4fea-9b06-b485aaff253f","Type":"ContainerStarted","Data":"36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.769662 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8pmzp" event={"ID":"47bb43f7-4b26-4fea-9b06-b485aaff253f","Type":"ContainerStarted","Data":"2462b7a3056317b7b288dbe1b2c7f479b93752bb53ffccc80e4cb2f7b670c54d"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.770981 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerStarted","Data":"672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.771015 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerStarted","Data":"e0a289a38813b777f444710aeca585d332fdf28a8eadac1fdb9275f7b3174c66"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.777012 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.777064 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.777082 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7cd3823ae6d5904db9a94c570584f0a1ccc50ad756aa1a4e710c49f9a0597685"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.779600 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cea3a3995cbd8b6e366e15d62365698f7e9c2abc41127edd0e5efffcc93cebe1"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.782772 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xr6t6" event={"ID":"364afc31-5c38-471a-b645-c6d4388a3dc5","Type":"ContainerStarted","Data":"06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.782820 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xr6t6" event={"ID":"364afc31-5c38-471a-b645-c6d4388a3dc5","Type":"ContainerStarted","Data":"2fdda1bd121db80f96e36a24f0c5e361183a578e881cf2eae5c043bf6c99ae02"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.784852 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3" exitCode=0 Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.784939 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.784984 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" 
event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"38de59a0fe679fe6441797a6ff231bc7c5d1b85589c3c666cf0c687f5d2e9dba"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.785507 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e" Feb 27 19:36:00 crc kubenswrapper[4941]: E0227 19:36:00.785676 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.786591 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.806754 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.806801 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.806814 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.806857 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.806873 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.812307 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.830237 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.844432 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.856897 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.869519 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.879482 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc 
kubenswrapper[4941]: I0227 19:36:00.891065 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.901784 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.909159 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.909198 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.909208 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.909225 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.909238 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:00Z","lastTransitionTime":"2026-02-27T19:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.915317 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.926129 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.937390 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.948721 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.962304 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.976291 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:00 crc kubenswrapper[4941]: I0227 19:36:00.990454 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.002485 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.012913 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.012978 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.012991 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.013009 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.013022 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.017762 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.038326 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.052275 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.066084 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.076813 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.096121 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.110958 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.115587 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.115622 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.115633 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc 
kubenswrapper[4941]: I0227 19:36:01.115645 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.115654 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.117106 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.117206 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.117181987 +0000 UTC m=+81.378322407 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.123692 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.135244 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.150021 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.162737 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.174980 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.191765 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.220966 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221021 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221050 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221073 4941 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221095 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221205 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221220 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221222 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221243 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221260 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221279 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221308 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.221289557 +0000 UTC m=+81.482429977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221323 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.221316328 +0000 UTC m=+81.482456748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221349 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221361 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221233 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221389 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.22137076 +0000 UTC m=+81.482511180 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221403 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.2213976 +0000 UTC m=+81.482538020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.221415 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:03.221409971 +0000 UTC m=+81.482550391 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221541 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221576 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221595 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.221604 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.323673 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.323707 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.323716 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.323729 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.323738 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.425522 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.425570 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.425581 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.425598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.425611 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.466399 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.466464 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.466576 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.466604 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.466724 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.466872 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.466971 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:01 crc kubenswrapper[4941]: E0227 19:36:01.467035 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.527770 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.527812 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.527820 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.527835 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.527845 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.629579 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.629900 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.629911 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.629925 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.629933 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.732232 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.732280 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.732292 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.732309 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.732321 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.789820 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1" exitCode=0 Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.789907 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.794661 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.794689 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.794698 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.816107 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.829850 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.837141 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc 
kubenswrapper[4941]: I0227 19:36:01.837174 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.837188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.837203 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.837214 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.847395 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.868926 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.880057 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.892622 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.904133 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.916391 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.930803 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.942433 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.942780 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.942792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:01 crc 
kubenswrapper[4941]: I0227 19:36:01.942809 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.942821 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:01Z","lastTransitionTime":"2026-02-27T19:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.947178 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.958514 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.968767 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.983890 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6
acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:01 crc kubenswrapper[4941]: I0227 19:36:01.997527 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:01Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.010303 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.051101 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc 
kubenswrapper[4941]: I0227 19:36:02.051129 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.051138 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.051152 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.051161 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.153014 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.153053 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.153064 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.153080 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.153092 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.255277 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.255306 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.255314 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.255327 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.255336 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.356897 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.356938 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.356950 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.356966 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.356976 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.459425 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.459457 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.459467 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.459506 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.459517 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.480220 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.507299 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.521727 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.533039 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.544165 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.555073 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.561174 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.561228 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.561242 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.561263 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.561278 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.572932 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.586114 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.599070 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.609543 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.619636 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.630632 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.639716 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.650273 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6
acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.662743 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.663332 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.663408 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.663502 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.663565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.663618 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.766257 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.766289 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.766297 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.766312 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.766321 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.774193 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.774235 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.774250 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.774273 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.774298 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: E0227 19:36:02.789598 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.793119 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.793160 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.793172 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.793188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.793201 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.799976 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.800220 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.800287 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.802054 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf" exitCode=0 Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.802126 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.804569 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab"} Feb 27 19:36:02 crc kubenswrapper[4941]: E0227 19:36:02.808975 4941 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.812795 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.812831 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.812840 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.812853 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.812861 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.817308 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc 
kubenswrapper[4941]: E0227 19:36:02.826194 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832560 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832777 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832802 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832815 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.832825 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: E0227 19:36:02.843661 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.845536 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bd
d73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.846643 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.846670 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.846678 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.846691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.846700 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.859052 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: E0227 19:36:02.859377 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: E0227 19:36:02.859654 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.868923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.868969 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.868978 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.868993 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.869002 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.871617 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.886746 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.900944 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.916195 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.929284 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.941784 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.954963 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.994657 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.998692 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.998722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 
19:36:02.998731 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.998745 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:02 crc kubenswrapper[4941]: I0227 19:36:02.998755 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:02Z","lastTransitionTime":"2026-02-27T19:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.011776 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.032399 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.044993 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a46939331
6bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.057060 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.066901 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.079394 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.096975 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.100337 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.100609 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.100753 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc 
kubenswrapper[4941]: I0227 19:36:03.100859 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.100959 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.110742 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.123615 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.138095 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.138888 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.139074 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.139045432 +0000 UTC m=+85.400185842 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.156693 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.170175 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.183267 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.198460 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.203712 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.203775 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.203784 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.203797 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.203806 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.213563 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.223482 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pro
xy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.239807 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.239851 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.239874 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.239899 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.239920 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.239962 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.239999 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240012 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240014 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240055 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.24004146 +0000 UTC m=+85.501181880 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240070 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.240063781 +0000 UTC m=+85.501204201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240072 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240161 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240276 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.240245646 +0000 UTC m=+85.501386166 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240368 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.240343869 +0000 UTC m=+85.501484439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240484 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240503 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240517 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.240422 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.240571 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:07.240559996 +0000 UTC m=+85.501700566 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.252898 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 
19:36:03.307065 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.307107 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.307118 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.307135 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.307146 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.410109 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.410156 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.410166 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.410183 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.410192 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.467044 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.467109 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.467150 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.467209 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.467215 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.467337 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.467539 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:03 crc kubenswrapper[4941]: E0227 19:36:03.467757 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.512412 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.512463 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.512505 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.512526 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.512540 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.615627 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.615701 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.615718 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.616148 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.616210 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.718797 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.718824 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.718832 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.718845 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.718856 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.809919 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97" exitCode=0 Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.810009 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.822070 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.822131 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.822149 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.822174 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.822193 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.826985 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z 
is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.836864 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.856141 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.871423 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.887185 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.910455 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.926784 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.926845 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.926859 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.926888 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.926906 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:03Z","lastTransitionTime":"2026-02-27T19:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.927360 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.942800 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.957969 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.972418 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc kubenswrapper[4941]: I0227 19:36:03.985615 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:03 crc 
kubenswrapper[4941]: I0227 19:36:03.998261 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:03Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.010211 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.022993 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.029206 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.029239 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.029248 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.029262 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.029272 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.034968 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.132367 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.132433 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.132446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.132472 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.132511 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.235672 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.235720 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.235731 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.235749 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.235763 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.338931 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.339010 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.339025 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.339049 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.339068 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.441531 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.441565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.441574 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.441589 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.441598 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.544509 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.544548 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.544561 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.544579 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.544592 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.646914 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.646949 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.646957 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.646969 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.646978 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.749375 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.749424 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.749434 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.749452 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.749467 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.819901 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.823594 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b" exitCode=0 Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.823639 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.837700 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.849128 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.852107 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.852158 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.852168 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.852184 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.852195 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.861572 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.878003 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.893351 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a46939331
6bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.914003 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.925065 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc 
kubenswrapper[4941]: I0227 19:36:04.936284 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.946817 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.954873 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.954919 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.954931 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:04 crc 
kubenswrapper[4941]: I0227 19:36:04.954947 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.954958 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:04Z","lastTransitionTime":"2026-02-27T19:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.958144 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.969264 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.984220 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:04 crc kubenswrapper[4941]: I0227 19:36:04.997705 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.009694 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.019046 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.056800 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.056845 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.056864 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.056881 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.056892 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.159915 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.159953 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.159961 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.159974 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.159982 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.263590 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.263642 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.263662 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.263687 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.263704 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.366383 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.366465 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.366529 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.366567 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.366596 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.466236 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.466278 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.466291 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.466268 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:05 crc kubenswrapper[4941]: E0227 19:36:05.466481 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:05 crc kubenswrapper[4941]: E0227 19:36:05.467074 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:05 crc kubenswrapper[4941]: E0227 19:36:05.467174 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:05 crc kubenswrapper[4941]: E0227 19:36:05.467277 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.469138 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.469182 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.469195 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.469218 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.469232 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.572346 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.572405 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.572422 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.572453 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.572495 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.676033 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.676091 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.676106 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.676134 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.676150 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.778871 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.778914 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.778923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.778940 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.778988 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.830540 4941 generic.go:334] "Generic (PLEG): container finished" podID="52622e46-d1a9-4b02-8ed1-8130f184b10c" containerID="5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559" exitCode=0 Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.830626 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerDied","Data":"5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.847262 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.862223 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc 
kubenswrapper[4941]: I0227 19:36:05.879013 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.882107 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.882153 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.882165 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.882186 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.882199 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.893276 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.909599 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.927361 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.945274 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.959929 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.974948 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6
acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.985720 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.986293 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.986307 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.986332 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.986348 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:05Z","lastTransitionTime":"2026-02-27T19:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:05 crc kubenswrapper[4941]: I0227 19:36:05.990526 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8d
cd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:05Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.002805 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.026543 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.042880 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.061571 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.078034 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.090567 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.090609 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.090621 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.090645 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.090659 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.193357 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.193430 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.193443 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.193464 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.193858 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.298237 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.298276 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.298284 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.298299 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.298311 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.401664 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.401756 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.401779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.401817 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.401841 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.504681 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.504739 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.504760 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.504779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.504791 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.607026 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.607068 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.607078 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.607092 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.607101 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.710347 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.710417 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.710439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.710459 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.710495 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.813034 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.813089 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.813101 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.813123 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.813134 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.839719 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" event={"ID":"52622e46-d1a9-4b02-8ed1-8130f184b10c","Type":"ContainerStarted","Data":"45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.845382 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.845820 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.845911 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.845928 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.860848 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.877428 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.900310 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.903941 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.904669 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.915616 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.915678 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.915694 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.915717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.915763 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:06Z","lastTransitionTime":"2026-02-27T19:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.922700 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.942555 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.956622 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.974556 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:06 crc kubenswrapper[4941]: I0227 19:36:06.989114 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.002952 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:06Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.019517 4941 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.019598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.019612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.019635 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.019651 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.026720 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.047093 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bon
d-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.065414 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.080582 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc 
kubenswrapper[4941]: I0227 19:36:07.102360 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.119214 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.123014 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.123058 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.123067 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc 
kubenswrapper[4941]: I0227 19:36:07.123083 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.123097 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.137667 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.158135 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.176200 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.189385 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.202549 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.202738 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.202711834 +0000 UTC m=+93.463852244 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.207233 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.221502 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39
aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.225053 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.225199 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.225282 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.225375 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.225488 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.237459 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.258346 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.275977 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088a
aee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c
38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T
19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.289840 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.303809 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.303929 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.303963 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.303997 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" 
Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.304032 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304267 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304306 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304320 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304382 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.304361561 +0000 UTC m=+93.565501981 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304454 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304558 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.304534576 +0000 UTC m=+93.565674996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304620 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304661 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.30464961 +0000 UTC m=+93.565790270 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304742 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304765 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304771 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304780 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304831 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.304819355 +0000 UTC m=+93.565960015 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.304854 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:15.304845726 +0000 UTC m=+93.565986396 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.308977 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.324842 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.329378 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.329439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.329450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc 
kubenswrapper[4941]: I0227 19:36:07.329498 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.329531 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.342536 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.362826 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.377194 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:07Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.433046 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.433123 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.433142 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.433172 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.433194 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.466684 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.466722 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.466845 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.466856 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.466892 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.467019 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.467104 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:07 crc kubenswrapper[4941]: E0227 19:36:07.467152 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.536398 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.536463 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.536502 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.536523 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.536534 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.639295 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.639331 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.639339 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.639353 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.639362 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.741313 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.741341 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.741349 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.741360 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.741372 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.843740 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.843776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.843807 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.843825 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.843837 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.945941 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.945985 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.946003 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.946028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:07 crc kubenswrapper[4941]: I0227 19:36:07.946038 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:07Z","lastTransitionTime":"2026-02-27T19:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.047945 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.047974 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.047983 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.047997 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.048006 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.150570 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.150785 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.150872 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.150949 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.151059 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.253278 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.253541 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.253828 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.253908 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.253966 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.356439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.356495 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.356507 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.356522 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.356533 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.458227 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.458265 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.458274 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.458289 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.458299 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.560367 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.560630 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.560701 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.560762 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.560824 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.663603 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.663866 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.663950 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.664050 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.664139 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.766631 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.766889 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.766952 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.767014 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.767076 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.869720 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.869767 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.869776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.869794 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.869805 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.972446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.972769 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.972848 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.972927 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:08 crc kubenswrapper[4941]: I0227 19:36:08.973022 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:08Z","lastTransitionTime":"2026-02-27T19:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.075941 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.075977 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.075986 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.075999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.076011 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.177638 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.177686 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.177703 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.177722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.177735 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.280355 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.280420 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.280431 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.280450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.280466 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.383050 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.383095 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.383107 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.383123 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.383134 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.466905 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.466964 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.466913 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.466947 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:09 crc kubenswrapper[4941]: E0227 19:36:09.467400 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:09 crc kubenswrapper[4941]: E0227 19:36:09.467518 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:09 crc kubenswrapper[4941]: E0227 19:36:09.467586 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:09 crc kubenswrapper[4941]: E0227 19:36:09.467713 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.479094 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.486841 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.486894 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.486906 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.486925 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.486939 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.590300 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.590364 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.590377 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.590401 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.590417 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.692497 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.692731 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.692819 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.692911 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.693020 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.796632 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.796686 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.796703 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.796722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.796741 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.856725 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/0.log" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.859951 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8" exitCode=1 Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.860034 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.861258 4941 scope.go:117] "RemoveContainer" containerID="ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.881138 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.897924 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.903686 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.903719 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.903727 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.903741 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.903753 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:09Z","lastTransitionTime":"2026-02-27T19:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.920160 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.941256 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.957988 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.973137 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:09 crc kubenswrapper[4941]: I0227 19:36:09.991781 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd
97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9
b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:09Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.006559 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.006994 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.007039 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.007050 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.007072 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.007085 4941 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.021363 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.036558 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888
cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.068355 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109056 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:09Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148190 6552 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148501 6552 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 19:36:09.148714 6552 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 19:36:09.148763 6552 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 19:36:09.148773 6552 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 19:36:09.148785 6552 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 19:36:09.148790 6552 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 19:36:09.148803 6552 factory.go:656] Stopping watch factory\\\\nI0227 19:36:09.148807 6552 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 19:36:09.148814 6552 ovnkube.go:599] Stopped ovnkube\\\\nI0227 19:36:09.148822 6552 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 19:36:09.148832 6552 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 19:36:09.148845 6552 handler.go:208] Removed *v1.Namespace event handler 
1\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346
732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109586 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109616 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109625 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109638 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.109648 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.129739 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.145766 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.161785 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.174669 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.212960 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.213009 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.213018 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc 
kubenswrapper[4941]: I0227 19:36:10.213038 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.213050 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.315837 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.315865 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.315876 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.315892 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.315903 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.418032 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.418066 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.418075 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.418090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.418100 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.520586 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.520630 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.520639 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.520654 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.520695 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.623561 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.623597 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.623609 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.623626 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.623637 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.726528 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.726560 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.726571 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.726591 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.726602 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.829335 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.829371 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.829380 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.829401 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.829414 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.865060 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/1.log" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.865745 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/0.log" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.868030 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f" exitCode=1 Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.868075 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.868111 4941 scope.go:117] "RemoveContainer" containerID="ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.868901 4941 scope.go:117] "RemoveContainer" containerID="14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f" Feb 27 19:36:10 crc kubenswrapper[4941]: E0227 19:36:10.869111 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.883606 4941 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.892357 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.906039 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.918280 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.931808 4941 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.931833 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.931842 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.931856 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.931865 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:10Z","lastTransitionTime":"2026-02-27T19:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.932054 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z 
is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.945668 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.957337 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.969267 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.978826 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:10 crc kubenswrapper[4941]: I0227 19:36:10.987890 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:10Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.004012 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:09Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148190 6552 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148501 6552 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 19:36:09.148714 6552 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 19:36:09.148763 6552 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 19:36:09.148773 6552 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 19:36:09.148785 6552 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 19:36:09.148790 6552 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 19:36:09.148803 6552 factory.go:656] Stopping watch factory\\\\nI0227 19:36:09.148807 6552 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 19:36:09.148814 6552 ovnkube.go:599] Stopped ovnkube\\\\nI0227 19:36:09.148822 6552 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 19:36:09.148832 6552 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 19:36:09.148845 6552 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.016548 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.027196 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.035004 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.035032 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.035040 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.035053 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.035064 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.036420 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc 
kubenswrapper[4941]: I0227 19:36:11.048725 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.061616 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:11Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.137769 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.137802 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.137815 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc 
kubenswrapper[4941]: I0227 19:36:11.137830 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.137841 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.241926 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.241989 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.242005 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.242027 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.242044 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.344984 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.345051 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.345067 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.345095 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.345117 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.447640 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.447672 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.447680 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.447692 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.447701 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.466714 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:11 crc kubenswrapper[4941]: E0227 19:36:11.466928 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.467695 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:11 crc kubenswrapper[4941]: E0227 19:36:11.467846 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.467969 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:11 crc kubenswrapper[4941]: E0227 19:36:11.468116 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.468230 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:11 crc kubenswrapper[4941]: E0227 19:36:11.468352 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.550326 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.550368 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.550383 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.550403 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.550419 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.653382 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.653415 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.653425 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.653441 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.653451 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.756376 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.756429 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.756439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.756454 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.756463 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.860042 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.860099 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.860107 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.860122 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.860131 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.872590 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/1.log" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.961835 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.961897 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.961915 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.961940 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:11 crc kubenswrapper[4941]: I0227 19:36:11.961959 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:11Z","lastTransitionTime":"2026-02-27T19:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.064448 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.064497 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.064505 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.064519 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.064528 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.167242 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.167264 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.167273 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.167287 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.167295 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.270029 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.270087 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.270098 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.270112 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.270121 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.371821 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.371853 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.371862 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.371875 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.371883 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.474599 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.474679 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.474694 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.474714 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.474732 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.479111 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.491581 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.502931 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.514588 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.532538 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:09Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148190 6552 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148501 6552 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 19:36:09.148714 6552 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 19:36:09.148763 6552 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 19:36:09.148773 6552 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 19:36:09.148785 6552 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 19:36:09.148790 6552 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 19:36:09.148803 6552 factory.go:656] Stopping watch factory\\\\nI0227 19:36:09.148807 6552 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 19:36:09.148814 6552 ovnkube.go:599] Stopped ovnkube\\\\nI0227 19:36:09.148822 6552 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 19:36:09.148832 6552 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 19:36:09.148845 6552 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.547415 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.556908 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.573811 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.577530 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.577557 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.577567 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.577583 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.577594 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.586581 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.596860 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pro
xy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.610693 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.622040 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.635666 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.646529 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.658021 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.667300 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:12Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.680269 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.680299 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.680310 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.680324 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.680333 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.782999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.783029 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.783046 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.783059 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.783068 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.885411 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.885545 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.885571 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.885601 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.885625 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.987923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.987969 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.987981 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.987998 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:12 crc kubenswrapper[4941]: I0227 19:36:12.988012 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:12Z","lastTransitionTime":"2026-02-27T19:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.091458 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.091521 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.091531 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.091544 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.091556 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.148725 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.148764 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.148775 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.148792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.148805 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.161002 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:13Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.164611 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.164675 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.164684 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.164699 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.164708 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.180539 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:13Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.184021 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.184052 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.184061 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.184077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.184087 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.196030 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:13Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.198963 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.198999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.199009 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.199022 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.199031 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.210783 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:13Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.214581 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.214613 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.214622 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.214635 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.214644 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.225433 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:13Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.225621 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.227091 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.227126 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.227136 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.227150 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.227158 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.329315 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.329361 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.329379 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.329400 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.329415 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.435916 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.436600 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.436689 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.436775 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.436893 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.466945 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.466963 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.467069 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.467235 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.467369 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.467409 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.467730 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.467788 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.468027 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e" Feb 27 19:36:13 crc kubenswrapper[4941]: E0227 19:36:13.468164 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.539821 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.540057 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.540141 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.540226 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.540314 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.642598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.642632 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.642641 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.642655 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.642667 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.745003 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.745028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.745035 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.745048 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.745057 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.846823 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.846867 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.846878 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.846892 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.846903 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.949198 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.949240 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.949251 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.949266 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:13 crc kubenswrapper[4941]: I0227 19:36:13.949279 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:13Z","lastTransitionTime":"2026-02-27T19:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.051385 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.051423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.051431 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.051446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.051457 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.154049 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.154341 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.154350 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.154364 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.154373 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.256941 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.256981 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.256995 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.257012 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.257024 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.358967 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.359013 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.359027 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.359054 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.359078 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.461835 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.461873 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.461882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.461895 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.461905 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.564703 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.564750 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.564763 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.564780 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.564790 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.667359 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.667410 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.667423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.667443 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.667458 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.770141 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.770253 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.770578 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.770603 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.770616 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.873142 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.873196 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.873223 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.873242 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.873252 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.975437 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.975498 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.975511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.975526 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:14 crc kubenswrapper[4941]: I0227 19:36:14.975539 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:14Z","lastTransitionTime":"2026-02-27T19:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.078181 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.078214 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.078225 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.078237 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.078258 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.179954 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.179990 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.179999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.180012 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.180021 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.203762 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.203922 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 19:36:31.203902021 +0000 UTC m=+109.465042451 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.282097 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.282140 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.282149 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.282163 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.282172 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.304450 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.304620 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.304639 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.304651 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.304699 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:31.304681893 +0000 UTC m=+109.565822313 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.385054 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.385089 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.385099 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.385113 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.385129 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.405000 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.405072 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.405145 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.405186 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405238 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:15 crc 
kubenswrapper[4941]: E0227 19:36:15.405265 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405321 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405335 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:31.40531217 +0000 UTC m=+109.666452590 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405341 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405348 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405387 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:15 crc 
kubenswrapper[4941]: E0227 19:36:15.405413 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:31.405390903 +0000 UTC m=+109.666531353 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405459 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:31.405428204 +0000 UTC m=+109.666568664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.405563 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:36:31.405538277 +0000 UTC m=+109.666678767 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.466549 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.466575 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.466566 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.466549 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.466674 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.466779 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.466859 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:15 crc kubenswrapper[4941]: E0227 19:36:15.466926 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.487308 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.487346 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.487354 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.487369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.487378 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.589716 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.589773 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.589789 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.589812 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.589832 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.691218 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.691246 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.691254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.691267 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.691276 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.794090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.794126 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.794154 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.794168 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.794180 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.896565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.896598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.896607 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.896619 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.896628 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.999166 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.999243 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.999254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.999268 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:15 crc kubenswrapper[4941]: I0227 19:36:15.999293 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:15Z","lastTransitionTime":"2026-02-27T19:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.102459 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.102582 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.102613 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.102646 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.102670 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.205399 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.205701 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.205790 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.205888 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.205992 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.307897 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.307932 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.307944 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.307960 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.307971 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.409873 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.409909 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.409919 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.409936 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.409947 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.512444 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.512547 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.512570 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.512597 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.512618 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.614459 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.614512 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.614522 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.614539 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.614552 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.716293 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.716545 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.716635 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.716734 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.716811 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.819595 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.819643 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.819653 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.819671 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.819682 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.898359 4941 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.921358 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.921400 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.921411 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.921425 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:16 crc kubenswrapper[4941]: I0227 19:36:16.921435 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:16Z","lastTransitionTime":"2026-02-27T19:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.023561 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.023817 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.023878 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.023937 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.023997 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.126918 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.126962 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.126974 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.127039 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.127055 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.229572 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.229614 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.229625 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.229644 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.229656 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.332012 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.332603 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.332753 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.332897 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.333023 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.436488 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.436557 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.436569 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.436584 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.436595 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.466871 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.466925 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:17 crc kubenswrapper[4941]: E0227 19:36:17.466999 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:17 crc kubenswrapper[4941]: E0227 19:36:17.467127 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.467188 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:17 crc kubenswrapper[4941]: E0227 19:36:17.467271 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.467368 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:17 crc kubenswrapper[4941]: E0227 19:36:17.467560 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.539667 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.539972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.540329 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.540721 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.540964 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.643605 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.643650 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.643662 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.643678 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.643690 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.746223 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.746272 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.746287 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.746306 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.746320 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.849020 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.849063 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.849074 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.849090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.849102 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.951452 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.951511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.951519 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.951533 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:17 crc kubenswrapper[4941]: I0227 19:36:17.951541 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:17Z","lastTransitionTime":"2026-02-27T19:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.054125 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.057545 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.057791 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.057947 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.057967 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.160183 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.160764 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.160886 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.160981 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.161072 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.263374 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.263618 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.263685 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.263747 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.263806 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.366670 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.366735 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.366752 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.366778 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.366794 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.469706 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.469775 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.469800 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.469829 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.469850 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.571679 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.571983 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.572115 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.572206 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.572282 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.674360 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.674398 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.674410 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.674424 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.674435 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.777090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.777138 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.777149 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.777169 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.777183 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.880013 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.880057 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.880068 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.880087 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.880098 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.983019 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.983054 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.983063 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.983079 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:18 crc kubenswrapper[4941]: I0227 19:36:18.983096 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:18Z","lastTransitionTime":"2026-02-27T19:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.085428 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.085462 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.085491 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.085507 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.085519 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.187428 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.187482 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.187492 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.187508 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.187517 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.289713 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.289756 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.289768 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.289784 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.289795 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.391778 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.391817 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.391825 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.391838 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.391847 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.467032 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:19 crc kubenswrapper[4941]: E0227 19:36:19.467187 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.467649 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:19 crc kubenswrapper[4941]: E0227 19:36:19.467720 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.467769 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:19 crc kubenswrapper[4941]: E0227 19:36:19.467820 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.467862 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:19 crc kubenswrapper[4941]: E0227 19:36:19.467910 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.495129 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.495188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.495205 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.495227 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.495243 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.597693 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.597756 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.597772 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.597833 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.597851 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.700316 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.700380 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.700398 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.700423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.700441 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.803068 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.803101 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.803109 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.803123 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.803133 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.905538 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.905576 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.905587 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.905603 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:19 crc kubenswrapper[4941]: I0227 19:36:19.905612 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:19Z","lastTransitionTime":"2026-02-27T19:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.007972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.008003 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.008011 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.008022 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.008030 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.110233 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.110257 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.110266 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.110281 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.110292 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.212758 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.212788 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.212795 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.212808 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.212816 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.314454 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.314501 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.314509 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.314521 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.314530 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.417717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.417760 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.417776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.417795 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.417810 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.520653 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.520737 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.520754 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.520778 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.520796 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.622798 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.622833 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.622881 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.622898 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.622912 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.725905 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.726198 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.726306 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.726425 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.726538 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.829540 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.829787 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.829897 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.830038 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.830129 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.933176 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.933240 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.933263 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.933291 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:20 crc kubenswrapper[4941]: I0227 19:36:20.933314 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:20Z","lastTransitionTime":"2026-02-27T19:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.035362 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.035439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.035454 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.035493 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.035508 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.137331 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.137371 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.137380 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.137394 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.137404 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.239343 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.239374 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.239387 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.239402 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.239412 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.341392 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.341425 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.341436 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.341453 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.341463 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.443659 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.443702 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.443713 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.443730 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.443741 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.465999 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.465999 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:21 crc kubenswrapper[4941]: E0227 19:36:21.466377 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.466049 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.466027 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:21 crc kubenswrapper[4941]: E0227 19:36:21.466451 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:21 crc kubenswrapper[4941]: E0227 19:36:21.466542 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:21 crc kubenswrapper[4941]: E0227 19:36:21.466325 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.546338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.546367 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.546376 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.546388 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.546396 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.648518 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.648561 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.648571 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.648586 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.648596 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.751038 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.751077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.751090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.751109 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.751123 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.853437 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.853495 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.853509 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.853521 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.853529 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.955577 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.955607 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.955615 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.955627 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:21 crc kubenswrapper[4941]: I0227 19:36:21.955636 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:21Z","lastTransitionTime":"2026-02-27T19:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.057874 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.057917 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.057929 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.057945 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.057957 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.160770 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.160809 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.160824 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.160842 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.160853 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.264090 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.264123 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.264132 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.264154 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.264163 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.366200 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.366241 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.366252 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.366265 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.366279 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.468764 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.468832 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.468843 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.468858 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.468867 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.479998 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.492893 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.505546 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.520612 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6
acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.536092 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.548059 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.571010 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc 
kubenswrapper[4941]: I0227 19:36:22.571054 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.571066 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.571083 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.571096 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.574384 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7e2cab8678783b6c9d46f04501c2a2e34b9b98a1fcc50fb3fe6fcd64947bc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:09Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148190 6552 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 19:36:09.148501 6552 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 19:36:09.148714 6552 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 19:36:09.148763 6552 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 19:36:09.148773 6552 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 19:36:09.148785 6552 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 19:36:09.148790 6552 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 19:36:09.148803 6552 factory.go:656] Stopping watch factory\\\\nI0227 19:36:09.148807 6552 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 19:36:09.148814 6552 ovnkube.go:599] Stopped ovnkube\\\\nI0227 19:36:09.148822 6552 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 19:36:09.148832 6552 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 19:36:09.148845 6552 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.590324 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.598458 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.608772 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.621276 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.631071 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.642878 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.658532 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.671357 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.673062 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.673097 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.673106 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.673119 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.673128 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.682450 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T19:36:22Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.775683 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.775717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.775727 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.775742 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.775753 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.877862 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.877898 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.877913 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.877928 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.877937 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.979708 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.979738 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.979746 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.979759 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:22 crc kubenswrapper[4941]: I0227 19:36:22.979768 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:22Z","lastTransitionTime":"2026-02-27T19:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.081691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.081730 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.081737 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.081769 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.081779 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.184094 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.184164 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.184182 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.184203 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.184220 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.287517 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.287588 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.287607 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.287630 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.287647 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.334224 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.334276 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.334304 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.334327 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.334342 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.355140 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:23Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.359242 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.359294 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.359304 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.359319 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.359347 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.371778 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:23Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.375389 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.375421 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.375433 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.375448 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.375459 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.387634 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:23Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.391328 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.391369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.391380 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.391396 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.391408 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.406915 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:23Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.414356 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.414584 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.414610 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.414629 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.414646 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.429696 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:23Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.429807 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.431338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.431378 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.431389 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.431404 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.431416 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.466841 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.466983 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.467302 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.467348 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.467562 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.467372 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.467632 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:23 crc kubenswrapper[4941]: E0227 19:36:23.467502 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.533810 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.533844 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.533854 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.533870 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.533880 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.636741 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.637045 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.637177 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.637347 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.637519 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.740488 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.740525 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.740536 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.740554 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.740566 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.842740 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.842981 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.843058 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.843131 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.843204 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.944729 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.945063 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.945254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.945402 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:23 crc kubenswrapper[4941]: I0227 19:36:23.945580 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:23Z","lastTransitionTime":"2026-02-27T19:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.047890 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.047927 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.047938 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.047955 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.047966 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.150338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.150366 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.150374 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.150387 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.150395 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.253280 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.253316 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.253330 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.253345 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.253356 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.355811 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.355865 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.355883 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.355905 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.355922 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.458761 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.458798 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.458813 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.458834 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.458849 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.561849 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.561879 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.561889 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.561902 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.561912 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.664386 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.664433 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.664450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.664523 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.664566 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.766502 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.766543 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.766555 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.766572 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.766582 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.869186 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.869245 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.869261 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.869282 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.869296 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.971452 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.971503 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.971511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.971526 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:24 crc kubenswrapper[4941]: I0227 19:36:24.971536 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:24Z","lastTransitionTime":"2026-02-27T19:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.074410 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.074981 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.075233 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.075450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.075703 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.178495 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.178537 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.178549 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.178567 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.178582 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.281349 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.281685 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.281774 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.281887 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.281971 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.384571 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.385132 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.385211 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.385304 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.385409 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.466005 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.466028 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:36:25 crc kubenswrapper[4941]: E0227 19:36:25.466106 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.466234 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:36:25 crc kubenswrapper[4941]: E0227 19:36:25.466393 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.466585 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:36:25 crc kubenswrapper[4941]: E0227 19:36:25.466621 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 19:36:25 crc kubenswrapper[4941]: E0227 19:36:25.466975 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.488131 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.488495 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.488608 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.488716 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.488837 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.592060 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.592115 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.592128 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.592147 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.592165 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.694541 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.694876 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.695037 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.695188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.695332 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.798238 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.798278 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.798291 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.798307 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.798321 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.901138 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.901179 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.901188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.901202 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:25 crc kubenswrapper[4941]: I0227 19:36:25.901214 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:25Z","lastTransitionTime":"2026-02-27T19:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.002937 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.003002 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.003020 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.003043 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.003063 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.105710 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.105768 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.105779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.105801 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.105817 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.207834 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.207873 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.207885 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.207903 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.207914 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.310747 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.310783 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.310792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.310806 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.310815 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.413147 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.413182 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.413190 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.413202 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.413211 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.467278 4941 scope.go:117] "RemoveContainer" containerID="14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.483562 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.495809 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.507585 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.554884 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.555257 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.555276 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.555300 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.555317 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.567861 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.587120 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.608949 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.628735 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.644502 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.657069 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.658228 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.658255 4941 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.658286 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.658301 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.658309 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.671736 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.682206 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.695399 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.711026 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.724868 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.737193 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.749066 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.761014 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.761051 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.761060 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:26 crc 
kubenswrapper[4941]: I0227 19:36:26.761075 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.761085 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.863301 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.863355 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.863367 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.863387 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.863424 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.920307 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/1.log" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.925059 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.925463 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.942848 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.959888 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.966063 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.966095 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.966103 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.966116 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.966125 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:26Z","lastTransitionTime":"2026-02-27T19:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.970046 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.981882 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:26 crc kubenswrapper[4941]: I0227 19:36:26.991949 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:26Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.003179 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.016522 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.026655 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.036752 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.045556 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.054390 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.062006 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.067677 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.067732 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.067743 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.067759 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.067771 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.070559 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.080840 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.089391 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.099483 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.169836 4941 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.169860 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.169868 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.169882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.169890 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.271947 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.271988 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.272029 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.272046 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.272056 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.374205 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.374237 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.374247 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.374260 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.374269 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.467058 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.467099 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.467103 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.467352 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:36:27 crc kubenswrapper[4941]: E0227 19:36:27.467411 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 19:36:27 crc kubenswrapper[4941]: E0227 19:36:27.467509 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 19:36:27 crc kubenswrapper[4941]: E0227 19:36:27.467590 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 19:36:27 crc kubenswrapper[4941]: E0227 19:36:27.467694 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.468448 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.478619 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.478658 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.478671 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.478688 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.478700 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.492601 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.581137 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.581165 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.581174 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.581187 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.581195 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.698854 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.698893 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.698904 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.698920 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.698941 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.801004 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.801040 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.801049 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.801063 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.801074 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.906835 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.906882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.906893 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.906908 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.906918 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:27Z","lastTransitionTime":"2026-02-27T19:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.931035 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/2.log"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.931607 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/1.log"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.934594 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098" exitCode=1
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.934713 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098"}
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.934871 4941 scope.go:117] "RemoveContainer" containerID="14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.935562 4941 scope.go:117] "RemoveContainer" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098"
Feb 27 19:36:27 crc kubenswrapper[4941]: E0227 19:36:27.935742 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d"
Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.945949 4941 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.949444 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9"} Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.950076 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.950439 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.960754 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc 
kubenswrapper[4941]: I0227 19:36:27.976557 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:27 crc kubenswrapper[4941]: I0227 19:36:27.989266 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.001731 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.009228 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.009271 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.009285 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.009302 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.009314 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.014445 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.035635 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.050272 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.065203 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.082854 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.097338 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.107773 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426
f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 
crc kubenswrapper[4941]: I0227 19:36:28.111447 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.111489 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.111497 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.111510 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.111520 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.121639 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.135321 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.145698 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.162601 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 
model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec433
46732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.176390 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.189677 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.202924 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.214037 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.214077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.214088 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.214122 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.214136 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.217110 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.235616 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f95f10e329e3f72c28975311627dd8ac844033f340fff2070792cba577533f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:10Z\\\",\\\"message\\\":\\\"es.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0227 19:36:10.719167 6902 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared inf\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 
model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec433
46732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.251734 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.262299 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.280025 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.292229 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.305184 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.314686 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.316601 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.316636 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.316646 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.316664 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.316676 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.328030 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.337144 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.349204 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.363289 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.375382 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.397185 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.413161 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.419917 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.420111 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.420185 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.420266 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.420309 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.523951 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.524009 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.524023 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.524040 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.524051 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.628373 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.628420 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.628430 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.628448 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.628458 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.731724 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.731763 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.731777 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.731819 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.731834 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.835588 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.835651 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.835668 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.835689 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.835703 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.938110 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.938144 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.938155 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.938167 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.938175 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:28Z","lastTransitionTime":"2026-02-27T19:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.954013 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/2.log" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.957621 4941 scope.go:117] "RemoveContainer" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098" Feb 27 19:36:28 crc kubenswrapper[4941]: E0227 19:36:28.957795 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.970120 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.983340 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:28 crc kubenswrapper[4941]: I0227 19:36:28.994022 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:28Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.004309 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.022276 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.035514 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.040155 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.040216 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.040230 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.040247 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.040259 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.047527 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.056590 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.071630 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.082989 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.096292 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.107259 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.127982 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.140121 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.142572 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.142637 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.142648 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.142665 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.142695 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.153913 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.166940 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.175763 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:29Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.245293 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.245342 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.245353 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.245369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.245387 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.347770 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.347818 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.347830 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.347848 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.347860 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.449938 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.449986 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.449999 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.450017 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.450036 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.466437 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.466497 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.466506 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.466571 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:29 crc kubenswrapper[4941]: E0227 19:36:29.466564 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9"
Feb 27 19:36:29 crc kubenswrapper[4941]: E0227 19:36:29.466655 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 19:36:29 crc kubenswrapper[4941]: E0227 19:36:29.466715 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 19:36:29 crc kubenswrapper[4941]: E0227 19:36:29.466782 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.552370 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.552408 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.552417 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.552430 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.552439 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.654921 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.654963 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.654972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.654986 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.654996 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.757769 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.757804 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.757813 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.757827 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.757838 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.861112 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.861182 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.861196 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.861213 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.861224 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.963761 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.963832 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.963845 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.963860 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:29 crc kubenswrapper[4941]: I0227 19:36:29.963872 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:29Z","lastTransitionTime":"2026-02-27T19:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.065881 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.065930 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.065940 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.065956 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.065968 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.168802 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.168891 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.168902 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.168917 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.168927 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.272136 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.272201 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.272217 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.272241 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.272258 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.374792 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.374857 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.374874 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.374895 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.374907 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.477035 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.477428 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.477445 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.477504 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.477531 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.580933 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.581000 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.581018 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.581041 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.581060 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.685590 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.685674 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.685709 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.685739 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.685762 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.787916 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.787947 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.787957 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.787973 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.787982 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.890500 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.890750 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.890861 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.890980 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.891075 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.993882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.994798 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.994813 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.994829 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:30 crc kubenswrapper[4941]: I0227 19:36:30.994842 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:30Z","lastTransitionTime":"2026-02-27T19:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.097207 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.097235 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.097243 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.097255 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.097263 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.199219 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.199450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.199601 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.199812 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.199982 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.230262 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.230425 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.230398504 +0000 UTC m=+141.491538924 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.302689 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.302726 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.302736 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.302751 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.302761 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.331417 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.331571 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.331589 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.331600 4941 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.331655 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.33164079 +0000 UTC m=+141.592781210 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.405448 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.405523 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.405548 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.405571 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.405586 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.432341 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.432420 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.432505 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.432550 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.432707 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.432794 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.432768862 +0000 UTC m=+141.693909322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433203 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433227 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433267 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433291 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.433264907 +0000 UTC m=+141.694405367 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433292 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433366 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.4333465 +0000 UTC m=+141.694486960 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433390 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.433456 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:03.433433772 +0000 UTC m=+141.694574222 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.466222 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.466346 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.466400 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.466439 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.466491 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.466532 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.466573 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 19:36:31 crc kubenswrapper[4941]: E0227 19:36:31.466613 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.507991 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.508486 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.508587 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.508678 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.508761 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.611188 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.611219 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.611227 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.611239 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.611247 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.713956 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.714025 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.714042 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.714066 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.714085 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.817447 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.817552 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.817567 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.817588 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.817604 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.919987 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.920057 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.920070 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.920087 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:31 crc kubenswrapper[4941]: I0227 19:36:31.920101 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:31Z","lastTransitionTime":"2026-02-27T19:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.022392 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.022444 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.022459 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.022496 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.022512 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.126011 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.126062 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.126077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.126105 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.126121 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.229861 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.229963 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.229988 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.230016 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.230038 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.333092 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.333161 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.333180 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.333207 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.333224 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.435185 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.435220 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.435228 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.435240 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.435248 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.489857 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.504798 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.518364 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.537824 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.537847 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.537855 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.537871 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.537883 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.552856 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.570009 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.586893 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.599206 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b88
5b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.620282 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.633264 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.641122 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.641155 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.641165 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.641180 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.641191 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.645051 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.658400 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.671553 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.695560 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.706948 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.720091 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.734324 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.743286 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.743309 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc 
kubenswrapper[4941]: I0227 19:36:32.743317 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.743330 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.743338 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.748332 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:32Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.845923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.845965 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.845976 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.845995 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.846006 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.948427 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.948496 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.948509 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.948526 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:32 crc kubenswrapper[4941]: I0227 19:36:32.948537 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:32Z","lastTransitionTime":"2026-02-27T19:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.051162 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.051200 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.051212 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.051228 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.051240 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.153129 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.153155 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.153162 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.153178 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.153186 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.255358 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.255404 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.255413 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.255427 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.255440 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.358245 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.358290 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.358302 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.358322 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.358335 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.461511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.461585 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.461607 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.461630 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.461646 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.465988 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.466021 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.466068 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.465997 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.466160 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.466319 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.466410 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.466582 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.564643 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.564700 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.564712 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.564727 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.564737 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.608191 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.608284 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.608300 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.608323 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.608333 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.621970 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:33Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.625194 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.625230 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.625238 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.625254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.625264 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.639137 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:33Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.642725 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.642767 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.642779 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.642796 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.642808 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.656240 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:33Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.659671 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.659715 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.659729 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.659747 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.659761 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.670777 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:33Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.674095 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.674193 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.674266 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.674327 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.674382 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.686685 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:33Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:33 crc kubenswrapper[4941]: E0227 19:36:33.687019 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.688511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.688612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.688677 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.688738 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.688807 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.790931 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.790988 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.791005 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.791028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.791048 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.893372 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.893413 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.893423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.893436 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.893445 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.996106 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.996331 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.996448 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.996556 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:33 crc kubenswrapper[4941]: I0227 19:36:33.996639 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:33Z","lastTransitionTime":"2026-02-27T19:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.098061 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.098271 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.098333 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.098430 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.098516 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.201584 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.201621 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.201630 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.201644 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.201652 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.304121 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.304355 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.304423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.304566 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.304683 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.408104 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.408463 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.408645 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.408828 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.408979 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.511972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.512936 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.513034 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.513117 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.513189 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.615974 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.616056 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.616077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.616148 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.616176 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.718644 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.718696 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.718709 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.718725 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.718737 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.826899 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.826941 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.826953 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.826970 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.826984 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.929892 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.929929 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.929941 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.929956 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:34 crc kubenswrapper[4941]: I0227 19:36:34.929968 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:34Z","lastTransitionTime":"2026-02-27T19:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.032674 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.032727 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.032745 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.032763 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.032774 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.136717 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.136784 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.136850 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.136876 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.136892 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.239584 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.239621 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.239629 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.239642 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.239651 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.342849 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.342930 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.342950 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.342974 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.342992 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.446499 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.446606 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.446622 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.446638 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.446650 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.466245 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.466275 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.466280 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:35 crc kubenswrapper[4941]: E0227 19:36:35.466381 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.466421 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:35 crc kubenswrapper[4941]: E0227 19:36:35.466617 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:35 crc kubenswrapper[4941]: E0227 19:36:35.466686 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:35 crc kubenswrapper[4941]: E0227 19:36:35.466993 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.550354 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.550446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.550463 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.550507 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.550534 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.654097 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.654167 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.654184 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.654208 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.654226 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.757254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.757301 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.757311 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.757328 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.757340 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.861284 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.861376 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.861402 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.861437 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.861518 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.965667 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.965730 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.965752 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.965776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:35 crc kubenswrapper[4941]: I0227 19:36:35.965794 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:35Z","lastTransitionTime":"2026-02-27T19:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.069415 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.069500 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.069516 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.069536 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.069549 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.172056 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.172121 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.172134 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.172153 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.172167 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.275742 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.275818 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.275837 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.275864 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.275883 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.379159 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.379202 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.379213 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.379230 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.379239 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.481658 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.481731 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.481749 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.481776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.481795 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.584396 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.584501 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.584519 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.584543 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.584560 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.687629 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.687712 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.687735 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.687763 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.687784 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.790932 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.791011 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.791028 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.791053 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.791071 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.893505 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.893784 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.893827 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.893858 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.893881 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.996124 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.996168 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.996183 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.996199 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:36 crc kubenswrapper[4941]: I0227 19:36:36.996211 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:36Z","lastTransitionTime":"2026-02-27T19:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.098450 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.098546 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.098569 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.098595 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.098618 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.202181 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.202265 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.202289 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.202319 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.202342 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.306177 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.306251 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.306267 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.306289 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.306306 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.408776 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.408864 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.408882 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.408965 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.408982 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.466158 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.466171 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:37 crc kubenswrapper[4941]: E0227 19:36:37.466294 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.466186 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:37 crc kubenswrapper[4941]: E0227 19:36:37.466396 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.466169 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:37 crc kubenswrapper[4941]: E0227 19:36:37.466491 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:37 crc kubenswrapper[4941]: E0227 19:36:37.466539 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.511613 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.511648 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.511657 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.511673 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.511682 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.614320 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.614362 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.614376 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.614395 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.614409 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.716963 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.717032 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.717046 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.717062 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.717078 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.819506 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.819577 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.819599 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.819617 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.819629 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.922590 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.922663 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.922688 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.922724 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:37 crc kubenswrapper[4941]: I0227 19:36:37.922746 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:37Z","lastTransitionTime":"2026-02-27T19:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.025591 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.025643 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.025661 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.025685 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.025705 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.128218 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.128271 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.128289 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.128312 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.128328 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.230940 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.230992 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.231009 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.231033 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.231057 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.332834 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.332861 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.332870 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.332881 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.332889 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.435733 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.435787 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.435805 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.435829 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.435847 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.538543 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.538577 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.538586 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.538598 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.538607 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.641778 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.641811 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.641818 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.641830 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.641839 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.744229 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.744271 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.744282 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.744299 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.744311 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.846972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.847004 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.847012 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.847026 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.847035 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.949511 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.949562 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.949573 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.949591 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:38 crc kubenswrapper[4941]: I0227 19:36:38.949602 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:38Z","lastTransitionTime":"2026-02-27T19:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.052103 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.052160 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.052177 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.052196 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.052213 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.154226 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.154301 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.154338 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.154369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.154392 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.256577 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.256612 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.256623 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.256639 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.256649 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.360020 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.360088 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.360106 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.360129 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.360149 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.463629 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.463697 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.463734 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.463769 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.463788 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.466920 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.466941 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.467049 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.467101 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:39 crc kubenswrapper[4941]: E0227 19:36:39.467299 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:39 crc kubenswrapper[4941]: E0227 19:36:39.467431 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:39 crc kubenswrapper[4941]: E0227 19:36:39.467571 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:39 crc kubenswrapper[4941]: E0227 19:36:39.467703 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.566080 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.566147 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.566170 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.566197 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.566221 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.667985 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.668062 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.668072 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.668151 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.668171 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.770386 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.770434 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.770446 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.770463 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.770505 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.873217 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.873269 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.873280 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.873296 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.873308 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.976268 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.976348 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.976366 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.976394 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:39 crc kubenswrapper[4941]: I0227 19:36:39.976412 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:39Z","lastTransitionTime":"2026-02-27T19:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.079199 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.079238 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.079249 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.079264 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.079276 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.181331 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.181369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.181382 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.181399 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.181427 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.284114 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.284175 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.284192 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.284216 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.284233 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.386601 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.386645 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.386658 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.386676 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.386689 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.489454 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.489509 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.489520 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.489538 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.489550 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.592065 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.592119 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.592131 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.592145 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.592157 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.694877 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.694930 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.694947 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.694972 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.695003 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.801566 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.801722 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.801757 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.801782 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.802220 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.906727 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.907040 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.907077 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.907134 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:40 crc kubenswrapper[4941]: I0227 19:36:40.907153 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:40Z","lastTransitionTime":"2026-02-27T19:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.009218 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.009254 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.009269 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.009290 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.009301 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.112081 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.112171 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.112189 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.112217 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.112239 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.215369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.215411 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.215423 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.215439 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.215450 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.318570 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.318634 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.318651 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.318674 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.318692 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.420848 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.420923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.420940 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.420962 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.420980 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.466566 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.466611 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:41 crc kubenswrapper[4941]: E0227 19:36:41.466686 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.466566 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:41 crc kubenswrapper[4941]: E0227 19:36:41.466870 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:41 crc kubenswrapper[4941]: E0227 19:36:41.466906 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.466671 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:41 crc kubenswrapper[4941]: E0227 19:36:41.467118 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.524507 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.524564 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.524586 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.524609 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.524628 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.627872 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.627926 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.627944 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.627967 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.627986 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.730575 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.730625 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.730647 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.730662 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.730673 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.833850 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.833893 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.833904 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.833921 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.833933 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.936750 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.936811 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.936829 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.936855 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:41 crc kubenswrapper[4941]: I0227 19:36:41.936873 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:41Z","lastTransitionTime":"2026-02-27T19:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.039565 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.039639 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.039662 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.039691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.039712 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:42Z","lastTransitionTime":"2026-02-27T19:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.143002 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.143087 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.143112 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.143134 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.143150 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:42Z","lastTransitionTime":"2026-02-27T19:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.246661 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.246712 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.246729 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.246752 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.246769 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:42Z","lastTransitionTime":"2026-02-27T19:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.350551 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.350621 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.350638 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.350665 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.350685 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:42Z","lastTransitionTime":"2026-02-27T19:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:42 crc kubenswrapper[4941]: E0227 19:36:42.451310 4941 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.471592 4941 scope.go:117] "RemoveContainer" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098" Feb 27 19:36:42 crc kubenswrapper[4941]: E0227 19:36:42.472768 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.491653 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.511216 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc 
kubenswrapper[4941]: I0227 19:36:42.531852 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.548602 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.566918 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.582744 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: E0227 19:36:42.583109 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.598445 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f6
4d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.610786 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03
d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.634798 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.653791 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.670453 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 
19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.681838 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.701009 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.717503 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.731538 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.746209 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:42 crc kubenswrapper[4941]: I0227 19:36:42.760985 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:42Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.466317 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.466377 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.466426 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.466388 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:43 crc kubenswrapper[4941]: E0227 19:36:43.466511 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:43 crc kubenswrapper[4941]: E0227 19:36:43.466650 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:43 crc kubenswrapper[4941]: E0227 19:36:43.466825 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:43 crc kubenswrapper[4941]: E0227 19:36:43.466964 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.984155 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.984212 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.984224 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.984243 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:43 crc kubenswrapper[4941]: I0227 19:36:43.984255 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:43Z","lastTransitionTime":"2026-02-27T19:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:43 crc kubenswrapper[4941]: E0227 19:36:43.996187 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:43Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.001029 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.001087 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.001114 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.001135 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.001150 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:44Z","lastTransitionTime":"2026-02-27T19:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:44 crc kubenswrapper[4941]: E0227 19:36:44.014431 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:44Z is after 2025-08-24T17:21:41Z"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.018306 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.018345 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.018357 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.018375 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.018386 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:44Z","lastTransitionTime":"2026-02-27T19:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:44 crc kubenswrapper[4941]: E0227 19:36:44.030637 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:44Z is after 2025-08-24T17:21:41Z"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.034015 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.034104 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.034124 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.034151 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.034168 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:44Z","lastTransitionTime":"2026-02-27T19:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:44 crc kubenswrapper[4941]: E0227 19:36:44.048633 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:44Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.052597 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.052650 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.052667 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.052691 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.052710 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:44Z","lastTransitionTime":"2026-02-27T19:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:44 crc kubenswrapper[4941]: E0227 19:36:44.067658 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:44Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:44 crc kubenswrapper[4941]: E0227 19:36:44.067799 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:44 crc kubenswrapper[4941]: I0227 19:36:44.479382 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.056016 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.070173 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.081927 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc 
kubenswrapper[4941]: I0227 19:36:45.097700 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.110747 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.126305 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.139145 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.158717 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.178118 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.192728 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.205385 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.218886 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.234993 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.256815 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36e
b8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f717
2d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.272669 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.288923 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.306563 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.318610 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.342333 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:45Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.466266 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.466327 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.466391 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:45 crc kubenswrapper[4941]: I0227 19:36:45.466299 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:45 crc kubenswrapper[4941]: E0227 19:36:45.466434 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:45 crc kubenswrapper[4941]: E0227 19:36:45.466567 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:45 crc kubenswrapper[4941]: E0227 19:36:45.466648 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:45 crc kubenswrapper[4941]: E0227 19:36:45.466776 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.015490 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/0.log" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.015539 4941 generic.go:334] "Generic (PLEG): container finished" podID="16d71936-7f0d-4add-a17b-400840d5fce2" containerID="672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65" exitCode=1 Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.015570 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerDied","Data":"672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65"} Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.015911 4941 scope.go:117] "RemoveContainer" containerID="672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.031074 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.044488 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.057929 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.075014 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.091758 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.109937 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.126578 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.142687 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.155996 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.177825 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19
:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.197327 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.215864 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.235445 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.249860 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.274677 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.291244 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.304522 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.318241 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:47Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.466041 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.466108 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.466160 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:47 crc kubenswrapper[4941]: I0227 19:36:47.466185 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:47 crc kubenswrapper[4941]: E0227 19:36:47.466368 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:47 crc kubenswrapper[4941]: E0227 19:36:47.466381 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:47 crc kubenswrapper[4941]: E0227 19:36:47.466459 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:47 crc kubenswrapper[4941]: E0227 19:36:47.466655 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:47 crc kubenswrapper[4941]: E0227 19:36:47.584595 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.020420 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/0.log" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.020500 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerStarted","Data":"98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf"} Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.053311 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.073225 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.090564 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.103422 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.119488 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.131833 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7
eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.141926 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.155230 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.166182 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.176926 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.197345 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.211598 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.221991 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.230211 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.242113 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.255235 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.273010 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:48 crc kubenswrapper[4941]: I0227 19:36:48.284096 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:48Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:49 crc kubenswrapper[4941]: I0227 19:36:49.466927 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:49 crc kubenswrapper[4941]: I0227 19:36:49.466927 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:49 crc kubenswrapper[4941]: I0227 19:36:49.466952 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:49 crc kubenswrapper[4941]: I0227 19:36:49.467063 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:49 crc kubenswrapper[4941]: E0227 19:36:49.467204 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:49 crc kubenswrapper[4941]: E0227 19:36:49.467325 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:49 crc kubenswrapper[4941]: E0227 19:36:49.467448 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:49 crc kubenswrapper[4941]: E0227 19:36:49.467534 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:50 crc kubenswrapper[4941]: I0227 19:36:50.480067 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 19:36:51 crc kubenswrapper[4941]: I0227 19:36:51.466700 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:51 crc kubenswrapper[4941]: I0227 19:36:51.466787 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:51 crc kubenswrapper[4941]: E0227 19:36:51.466913 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:51 crc kubenswrapper[4941]: I0227 19:36:51.466747 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:51 crc kubenswrapper[4941]: E0227 19:36:51.467081 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:51 crc kubenswrapper[4941]: I0227 19:36:51.467145 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:51 crc kubenswrapper[4941]: E0227 19:36:51.467225 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:51 crc kubenswrapper[4941]: E0227 19:36:51.467300 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.487983 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.505344 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc 
kubenswrapper[4941]: I0227 19:36:52.527081 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.545642 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.567406 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.584516 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: E0227 19:36:52.585503 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.609052 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa37232
69019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-
o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.622759 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.636962 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.653088 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.670961 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.685402 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7
eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.701087 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.711746 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.724539 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28dc5b62-99d9-49f1-8224-8b323a8c0501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b0f85b9cea9d0041d1782c27844e121dcc9b493d0007b246826df38298f43fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa830b245caf4e84d125dcb5b1d4abda047cd1b00bfc9010f45d57ac28cbb86e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a77558cd6a5ef63a2b2d6db5d9bcd243f504fe8a27412832477698d908e9d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.741262 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.753302 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.764875 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:52 crc kubenswrapper[4941]: I0227 19:36:52.795749 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6
e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:52Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:53 crc kubenswrapper[4941]: I0227 19:36:53.466376 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:53 crc kubenswrapper[4941]: I0227 19:36:53.466443 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:53 crc kubenswrapper[4941]: I0227 19:36:53.466376 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:53 crc kubenswrapper[4941]: I0227 19:36:53.466460 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:53 crc kubenswrapper[4941]: E0227 19:36:53.466576 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:53 crc kubenswrapper[4941]: E0227 19:36:53.466676 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:53 crc kubenswrapper[4941]: E0227 19:36:53.466795 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:53 crc kubenswrapper[4941]: E0227 19:36:53.466887 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.341566 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.341603 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.341611 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.341624 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.341633 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:54Z","lastTransitionTime":"2026-02-27T19:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.352604 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:54Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.356851 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.356905 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.356923 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.356948 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.356965 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:54Z","lastTransitionTime":"2026-02-27T19:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.376675 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:54Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.381558 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.381611 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.381627 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.381647 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.381661 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:54Z","lastTransitionTime":"2026-02-27T19:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.397628 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:54Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.401600 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.401654 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.401673 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.401696 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.401712 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:54Z","lastTransitionTime":"2026-02-27T19:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.417588 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:54Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.421793 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.421846 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.421864 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.421884 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:36:54 crc kubenswrapper[4941]: I0227 19:36:54.421900 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:36:54Z","lastTransitionTime":"2026-02-27T19:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.439880 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:54Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:54 crc kubenswrapper[4941]: E0227 19:36:54.440029 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:36:55 crc kubenswrapper[4941]: I0227 19:36:55.466456 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:55 crc kubenswrapper[4941]: I0227 19:36:55.466526 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:55 crc kubenswrapper[4941]: I0227 19:36:55.466458 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:55 crc kubenswrapper[4941]: E0227 19:36:55.466608 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:55 crc kubenswrapper[4941]: I0227 19:36:55.466456 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:55 crc kubenswrapper[4941]: E0227 19:36:55.466705 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:55 crc kubenswrapper[4941]: E0227 19:36:55.466774 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:55 crc kubenswrapper[4941]: E0227 19:36:55.466900 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:57 crc kubenswrapper[4941]: I0227 19:36:57.466782 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:57 crc kubenswrapper[4941]: I0227 19:36:57.466852 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:57 crc kubenswrapper[4941]: I0227 19:36:57.466866 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:57 crc kubenswrapper[4941]: E0227 19:36:57.466973 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:36:57 crc kubenswrapper[4941]: I0227 19:36:57.466813 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:57 crc kubenswrapper[4941]: E0227 19:36:57.467169 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:57 crc kubenswrapper[4941]: E0227 19:36:57.467674 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:57 crc kubenswrapper[4941]: E0227 19:36:57.467822 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:57 crc kubenswrapper[4941]: I0227 19:36:57.468110 4941 scope.go:117] "RemoveContainer" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098" Feb 27 19:36:57 crc kubenswrapper[4941]: E0227 19:36:57.591405 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.062554 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/2.log" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.065722 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261"} Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.066141 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.083934 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.097745 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc 
kubenswrapper[4941]: I0227 19:36:58.118729 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.130837 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.154000 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.167207 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.195115 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.211723 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.228653 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.243315 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.256705 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.268387 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7
eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.280216 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.294786 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28dc5b62-99d9-49f1-8224-8b323a8c0501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b0f85b9cea9d0041d1782c27844e121dcc9b493d0007b246826df38298f43fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa830b245caf4e84d125dcb5b1d4abda047cd1b00bfc9010f45d57ac28cbb86e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a77558cd6a5ef63a2b2d6db5d9bcd243f504fe8a27412832477698d908e9d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.307655 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.316964 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.327020 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.352212 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 
model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:58 crc kubenswrapper[4941]: I0227 19:36:58.368852 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c2
2c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:58Z is after 2025-08-24T17:21:41Z" Feb 27 19:36:59 crc kubenswrapper[4941]: I0227 19:36:59.467013 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:36:59 crc kubenswrapper[4941]: I0227 19:36:59.467013 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:36:59 crc kubenswrapper[4941]: I0227 19:36:59.467110 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:36:59 crc kubenswrapper[4941]: I0227 19:36:59.467150 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:36:59 crc kubenswrapper[4941]: E0227 19:36:59.467228 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:36:59 crc kubenswrapper[4941]: E0227 19:36:59.467424 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:36:59 crc kubenswrapper[4941]: E0227 19:36:59.467522 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:36:59 crc kubenswrapper[4941]: E0227 19:36:59.467640 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.075236 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/3.log" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.076622 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/2.log" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.080231 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" exitCode=1 Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.080286 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" 
event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261"} Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.080337 4941 scope.go:117] "RemoveContainer" containerID="2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.081827 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:37:00 crc kubenswrapper[4941]: E0227 19:37:00.082151 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.098795 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.117153 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28dc5b62-99d9-49f1-8224-8b323a8c0501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b0f85b9cea9d0041d1782c27844e121dcc9b493d0007b246826df38298f43fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa830b245caf4e84d125dcb5b1d4abda047cd1b00bfc9010f45d57ac28cbb86e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a77558cd6a5ef63a2b2d6db5d9bcd243f504fe8a27412832477698d908e9d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.137175 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.157154 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.172357 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.192584 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:59Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 19:36:59.314257 7461 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nI0227 19:36:59.314266 7461 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0227 19:36:59.314277 7461 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0227 19:36:59.314283 7461 services_controller.go:454] Service openshift-apiserver/check-endpoints for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 19:36:59.314293 7461 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 
19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.212680 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc
088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.228371 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.249760 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.265765 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.280717 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.297029 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.310628 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.339844 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.359126 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575da
ead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.373135 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.387199 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.402893 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:00 crc kubenswrapper[4941]: I0227 19:37:00.416043 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7
eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:00Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:01 crc kubenswrapper[4941]: I0227 19:37:01.087324 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/3.log" Feb 27 19:37:01 crc kubenswrapper[4941]: I0227 19:37:01.466854 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:01 crc kubenswrapper[4941]: I0227 19:37:01.466959 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:01 crc kubenswrapper[4941]: I0227 19:37:01.467051 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:01 crc kubenswrapper[4941]: E0227 19:37:01.467055 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:01 crc kubenswrapper[4941]: E0227 19:37:01.467220 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:01 crc kubenswrapper[4941]: I0227 19:37:01.467234 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:01 crc kubenswrapper[4941]: E0227 19:37:01.467282 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:01 crc kubenswrapper[4941]: E0227 19:37:01.467344 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.487964 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9416b06693e49c91d3249cdd8175ddafd8400794b71f08f2ea5a3a3bb8cd9be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.506187 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d14d6b3c4d6f537e75933d4b3db92f12aa0c9e681534db8040ac097b95e4ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.522441 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd71801-391f-4811-992b-e7ca21d72fbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86114116b1d25c3c2d650574d617e8b8bba86d0b5cc5df7b6cca6aefb8f4d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccb648c011b9af125d6874d557133a4d3b885b233a87888f1051ef4fb1126c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk9db\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9fhpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.554765 4941 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb476894-9c4f-487a-bfa6-5babb5243c0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e30f42a813b89a558b1fb1684584ed5fb983c7df88a982b3a56a074cb139098\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:27Z\\\",\\\"message\\\":\\\"[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 19:36:27.777282 7080 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:36:27Z is after 2025-08-24T17:21:41Z]\\\\nI0227 19:36:27.777393 7080 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:59Z\\\",\\\"message\\\":\\\"ng{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 19:36:59.314257 7461 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nI0227 19:36:59.314266 7461 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0227 19:36:59.314277 7461 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0227 19:36:59.314283 7461 services_controller.go:454] Service openshift-apiserver/check-endpoints for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 19:36:59.314293 7461 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rxgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v74b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 
19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.580214 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52622e46-d1a9-4b02-8ed1-8130f184b10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b6bcc56a9aa18498b0ddf662888a1e6fdfbe14aa345ece793c53f888f117f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6530e8534e191ba9e7867d130a5419b3449c9ea57183b1571244fb49eda47641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f36eb8e933d73dc
088aaee6877c83269d527816bf24fe5cb4f218e89fe5eec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f7172d89218aa59165c38883257a469393316bf0eff258b72523dcb4d7d58cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83c22c1ff2543d4698fe9299814b499e3ea505caf1f9d678e977552df3cef97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eb5141111f8e6e203777ce1836930bb5dcaaffc58922f1b5c8007ff491e3a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c17f7cf78bb5ce09802bcb8e50c9601bdc7388b9b3e754761932adf39004559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:36:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2jq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wbhlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: E0227 19:37:02.592355 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.593239 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18f36548-e08d-4fbe-a2cd-7d924d6f1a64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e771ded2db606a44560002975964bb1bd5c1e2033f6bf1e8ea6ba1990095c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a428f05df305249976efdcc3844711055a9b295846f4798b4addbc7b725405e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.605696 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28dc5b62-99d9-49f1-8224-8b323a8c0501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b0f85b9cea9d0041d1782c27844e121dcc9b493d0007b246826df38298f43fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa830b245caf4e84d125dcb5b1d4abda047cd1b00bfc9010f45d57ac28cbb86e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a77558cd6a5ef63a2b2d6db5d9bcd243f504fe8a27412832477698d908e9d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c6654b59efc1e9e84a2bf4f5116a28c0946bca3d5b5b27e8221e7462ba7cdca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.620126 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.629828 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c0b99f5-8424-4e74-a332-f6dff828c48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5288e6fb1bbb8a00bdd73609e19062a760bb5231f183031b315a21f743103c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b1897
8ba616907c98f6dd8bac6817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hj7qr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.641547 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.650700 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4pb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mvmp7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.669562 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.679547 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8pmzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47bb43f7-4b26-4fea-9b06-b485aaff253f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f31d7744a7ee156b0a794f493a1b631f6f6481a68495c0fca9018efbd5137f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8pmzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.695245 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd1960-6fc0-414c-bab6-919f76ddd56d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d9155642dc0db690c9c10b9d8ecc975c41bc299bf100d1bd8214622193292f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://9c20307a4e74dc0fe477bb0ad82572f0b009441bb3b714bc2528711e2a6be0d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:07Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 19:34:44.526327 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 19:34:44.529571 1 observer_polling.go:159] Starting file observer\\\\nI0227 19:34:44.559575 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 19:34:44.565917 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 19:35:07.698595 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 19:35:07.698693 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:35:06Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40079c7faa02f3536954f56d3acb4a7232369b9cf5a066e81408eed72e4ae47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa6997db0064de34e76d7a6c9a6befd891e9997c9a91ab3eb7435ce3e20b5f58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.710054 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c58d1e6e1afdb8bb7e1a3a93ed83626de5e9f2948f159c16490b3399ef4e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e224e9fe695f49f486ce3e216390bb4e3313b3f8e8dcd87fbcc72cccde15416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.725182 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lt4bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16d71936-7f0d-4add-a17b-400840d5fce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T19:36:46Z\\\",\\\"message\\\":\\\"2026-02-27T19:36:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb\\\\n2026-02-27T19:36:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_777f9f81-d167-4f29-b8db-53a6326f75cb to /host/opt/cni/bin/\\\\n2026-02-27T19:36:01Z [verbose] multus-daemon started\\\\n2026-02-27T19:36:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T19:36:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lt4bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.736268 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xr6t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"364afc31-5c38-471a-b645-c6d4388a3dc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06731b1d3104dfac7
eb266cb8d990b03d06867fa1d4f58331fd99c529790f465\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xr6t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.754258 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a1f477-b8b1-4953-a08d-acfca759097d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f09f8bac8a4260abf06c230bcc4b1caa11a1e7c301a192f4f67628f12eaf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4d1f1527f02388b11933533c4618558e8c3bbdb115cb7bfe4c2ce33bbb73a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1884fa228387fefddef615f52fe1244092dd73860da99a9aa97d046eb16ef1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6457cacefb5f594c79b510993d583dbc46edd42f0d2c9e64317fccde555b85da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e91542da5f32013bed7a57111fdbc81de414890daa7cb03245b51826ed5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0c6c91950870b99013778863eccb6309c78616a0cfeedc62d2892023eaccb46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9580d6a5bf1b2772aa4a9f8c7a364e35601b8e3a2923a5ec11328107608fab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680c6b4e4fdd65fbe5810049249c0d9fe72dbbafbf5ff80a1de6ac441df00141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:02 crc kubenswrapper[4941]: I0227 19:37:02.767200 4941 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c6948e-2810-45dd-a4b0-eac6107c7799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T19:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:44Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T19:35:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 19:35:39.919761 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 19:35:39.919871 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 19:35:39.920561 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2242982959/tls.crt::/tmp/serving-cert-2242982959/tls.key\\\\\\\"\\\\nI0227 19:35:40.361340 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 19:35:40.363745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 19:35:40.363763 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 19:35:40.363782 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 19:35:40.363787 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 19:35:40.367578 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0227 19:35:40.367593 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 19:35:40.367604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367610 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 19:35:40.367615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 19:35:40.367620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 19:35:40.367624 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 19:35:40.367628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 19:35:40.369024 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T19:35:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T19:34:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T19:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T19:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T19:34:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:02Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.271952 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.272545 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.272519572 +0000 UTC m=+205.533660032 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.373092 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.373347 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.373393 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.373415 4941 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.373544 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.373515323 +0000 UTC m=+205.634655783 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.466655 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.466730 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.466775 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.466901 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.466911 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.466990 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.467165 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.467220 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.474267 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.474334 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474353 4941 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.474361 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474412 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.474392972 +0000 UTC m=+205.735533462 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474451 4941 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: I0227 19:37:03.474528 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474538 4941 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474580 4941 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474593 4941 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474604 4941 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474613 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.474591997 +0000 UTC m=+205.735732417 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474642 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.474626468 +0000 UTC m=+205.735766978 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 19:37:03 crc kubenswrapper[4941]: E0227 19:37:03.474664 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs podName:68a8b3ac-f7b7-412b-8c30-96c44ba947c9 nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.474652639 +0000 UTC m=+205.735793179 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs") pod "network-metrics-daemon-mvmp7" (UID: "68a8b3ac-f7b7-412b-8c30-96c44ba947c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.517736 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.517783 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.517794 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.517810 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.517822 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:04Z","lastTransitionTime":"2026-02-27T19:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.538763 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.543458 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.543552 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.543573 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.543600 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.543618 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:04Z","lastTransitionTime":"2026-02-27T19:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.560429 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.565204 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.565245 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.565262 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.565284 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.565302 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:04Z","lastTransitionTime":"2026-02-27T19:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.583549 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.587653 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.587697 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.587708 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.587724 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.587735 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:04Z","lastTransitionTime":"2026-02-27T19:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.603260 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.607919 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.607997 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.608011 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.608029 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:04 crc kubenswrapper[4941]: I0227 19:37:04.608066 4941 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:04Z","lastTransitionTime":"2026-02-27T19:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.622079 4941 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T19:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8033ec25-48ed-4948-8194-eb2027952881\\\",\\\"systemUUID\\\":\\\"adbc36aa-8a3d-41e4-9e2f-14e206f3a4ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T19:37:04Z is after 2025-08-24T17:21:41Z" Feb 27 19:37:04 crc kubenswrapper[4941]: E0227 19:37:04.622222 4941 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 19:37:05 crc kubenswrapper[4941]: I0227 19:37:05.466773 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:05 crc kubenswrapper[4941]: I0227 19:37:05.466830 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:05 crc kubenswrapper[4941]: I0227 19:37:05.466830 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:05 crc kubenswrapper[4941]: E0227 19:37:05.466932 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:05 crc kubenswrapper[4941]: I0227 19:37:05.466961 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:05 crc kubenswrapper[4941]: E0227 19:37:05.467056 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:05 crc kubenswrapper[4941]: E0227 19:37:05.467163 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:05 crc kubenswrapper[4941]: E0227 19:37:05.467239 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:07 crc kubenswrapper[4941]: I0227 19:37:07.466382 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:07 crc kubenswrapper[4941]: I0227 19:37:07.466436 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:07 crc kubenswrapper[4941]: I0227 19:37:07.466441 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:07 crc kubenswrapper[4941]: I0227 19:37:07.466392 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:07 crc kubenswrapper[4941]: E0227 19:37:07.466542 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:07 crc kubenswrapper[4941]: E0227 19:37:07.466660 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:07 crc kubenswrapper[4941]: E0227 19:37:07.466750 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:07 crc kubenswrapper[4941]: E0227 19:37:07.466823 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:07 crc kubenswrapper[4941]: E0227 19:37:07.593242 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:09 crc kubenswrapper[4941]: I0227 19:37:09.466967 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:09 crc kubenswrapper[4941]: I0227 19:37:09.467012 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:09 crc kubenswrapper[4941]: I0227 19:37:09.467064 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:09 crc kubenswrapper[4941]: E0227 19:37:09.467111 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:09 crc kubenswrapper[4941]: I0227 19:37:09.467147 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:09 crc kubenswrapper[4941]: E0227 19:37:09.467307 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:09 crc kubenswrapper[4941]: E0227 19:37:09.467358 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:09 crc kubenswrapper[4941]: E0227 19:37:09.467634 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.466079 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.466164 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.466181 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:11 crc kubenswrapper[4941]: E0227 19:37:11.466277 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.466325 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:11 crc kubenswrapper[4941]: E0227 19:37:11.466462 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:11 crc kubenswrapper[4941]: E0227 19:37:11.466583 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:11 crc kubenswrapper[4941]: E0227 19:37:11.467036 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.467200 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:37:11 crc kubenswrapper[4941]: E0227 19:37:11.467374 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.550059 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podStartSLOduration=106.550037326 podStartE2EDuration="1m46.550037326s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.5370844 +0000 UTC m=+149.798224830" watchObservedRunningTime="2026-02-27 19:37:11.550037326 +0000 UTC m=+149.811177756" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.561273 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-8pmzp" podStartSLOduration=106.561231302 podStartE2EDuration="1m46.561231302s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.55938435 +0000 UTC m=+149.820524780" watchObservedRunningTime="2026-02-27 19:37:11.561231302 +0000 UTC m=+149.822371722" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.585418 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=44.585395934 podStartE2EDuration="44.585395934s" podCreationTimestamp="2026-02-27 19:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.585233919 +0000 UTC m=+149.846374349" watchObservedRunningTime="2026-02-27 19:37:11.585395934 +0000 UTC m=+149.846536354" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.603616 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.603598768 podStartE2EDuration="1m11.603598768s" podCreationTimestamp="2026-02-27 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.603453454 +0000 UTC m=+149.864593884" watchObservedRunningTime="2026-02-27 19:37:11.603598768 +0000 UTC m=+149.864739188" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.621119 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=27.621099052 podStartE2EDuration="27.621099052s" podCreationTimestamp="2026-02-27 19:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 19:37:11.620851415 +0000 UTC m=+149.881991845" watchObservedRunningTime="2026-02-27 19:37:11.621099052 +0000 UTC m=+149.882239482" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.667786 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lt4bk" podStartSLOduration=106.66776386 podStartE2EDuration="1m46.66776386s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.65396289 +0000 UTC m=+149.915103350" watchObservedRunningTime="2026-02-27 19:37:11.66776386 +0000 UTC m=+149.928904290" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.682230 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xr6t6" podStartSLOduration=106.682211008 podStartE2EDuration="1m46.682211008s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.668338466 +0000 UTC m=+149.929478896" watchObservedRunningTime="2026-02-27 19:37:11.682211008 +0000 UTC m=+149.943351438" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.682752 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=62.682744263000004 podStartE2EDuration="1m2.682744263s" podCreationTimestamp="2026-02-27 19:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.682364492 +0000 UTC m=+149.943504932" watchObservedRunningTime="2026-02-27 19:37:11.682744263 +0000 UTC m=+149.943884693" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.707053 4941 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=21.707032638 podStartE2EDuration="21.707032638s" podCreationTimestamp="2026-02-27 19:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.694511985 +0000 UTC m=+149.955652405" watchObservedRunningTime="2026-02-27 19:37:11.707032638 +0000 UTC m=+149.968173058" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.751831 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9fhpx" podStartSLOduration=106.751811303 podStartE2EDuration="1m46.751811303s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.730978644 +0000 UTC m=+149.992119074" watchObservedRunningTime="2026-02-27 19:37:11.751811303 +0000 UTC m=+150.012951733" Feb 27 19:37:11 crc kubenswrapper[4941]: I0227 19:37:11.768799 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wbhlr" podStartSLOduration=106.768780542 podStartE2EDuration="1m46.768780542s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:11.768405221 +0000 UTC m=+150.029545641" watchObservedRunningTime="2026-02-27 19:37:11.768780542 +0000 UTC m=+150.029920962" Feb 27 19:37:12 crc kubenswrapper[4941]: E0227 19:37:12.594546 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 19:37:13 crc kubenswrapper[4941]: I0227 19:37:13.466220 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:13 crc kubenswrapper[4941]: I0227 19:37:13.466268 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:13 crc kubenswrapper[4941]: I0227 19:37:13.466243 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:13 crc kubenswrapper[4941]: E0227 19:37:13.466363 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:13 crc kubenswrapper[4941]: I0227 19:37:13.466330 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:13 crc kubenswrapper[4941]: E0227 19:37:13.466519 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:13 crc kubenswrapper[4941]: E0227 19:37:13.466671 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:13 crc kubenswrapper[4941]: E0227 19:37:13.466733 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.661871 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.661945 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.661960 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.661985 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.662004 4941 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T19:37:14Z","lastTransitionTime":"2026-02-27T19:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.717415 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h"] Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.717803 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.720224 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.720456 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.720656 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.720834 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.886415 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 
19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.886560 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3822d3-1405-446a-81f1-ea7f4c03d553-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.886615 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c3822d3-1405-446a-81f1-ea7f4c03d553-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.886868 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3822d3-1405-446a-81f1-ea7f4c03d553-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.887000 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988189 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988272 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3822d3-1405-446a-81f1-ea7f4c03d553-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988311 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c3822d3-1405-446a-81f1-ea7f4c03d553-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988316 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988374 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3822d3-1405-446a-81f1-ea7f4c03d553-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: 
I0227 19:37:14.988403 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.988521 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c3822d3-1405-446a-81f1-ea7f4c03d553-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.989500 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3822d3-1405-446a-81f1-ea7f4c03d553-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:14 crc kubenswrapper[4941]: I0227 19:37:14.995963 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3822d3-1405-446a-81f1-ea7f4c03d553-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: \"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.008392 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c3822d3-1405-446a-81f1-ea7f4c03d553-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7mf9h\" (UID: 
\"9c3822d3-1405-446a-81f1-ea7f4c03d553\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.031735 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" Feb 27 19:37:15 crc kubenswrapper[4941]: W0227 19:37:15.054251 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3822d3_1405_446a_81f1_ea7f4c03d553.slice/crio-c7a6989daf2e07cdabd9baf8166ce81ee6c5647046aaec3cf68b05d08a5e76d3 WatchSource:0}: Error finding container c7a6989daf2e07cdabd9baf8166ce81ee6c5647046aaec3cf68b05d08a5e76d3: Status 404 returned error can't find the container with id c7a6989daf2e07cdabd9baf8166ce81ee6c5647046aaec3cf68b05d08a5e76d3 Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.139249 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" event={"ID":"9c3822d3-1405-446a-81f1-ea7f4c03d553","Type":"ContainerStarted","Data":"c7a6989daf2e07cdabd9baf8166ce81ee6c5647046aaec3cf68b05d08a5e76d3"} Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.467094 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.467143 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.467290 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:15 crc kubenswrapper[4941]: E0227 19:37:15.467275 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.467357 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:15 crc kubenswrapper[4941]: E0227 19:37:15.467518 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:15 crc kubenswrapper[4941]: E0227 19:37:15.467568 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:15 crc kubenswrapper[4941]: E0227 19:37:15.467643 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.675505 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 19:37:15 crc kubenswrapper[4941]: I0227 19:37:15.687226 4941 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 19:37:16 crc kubenswrapper[4941]: I0227 19:37:16.145807 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" event={"ID":"9c3822d3-1405-446a-81f1-ea7f4c03d553","Type":"ContainerStarted","Data":"adb37faacdec3c503e369bd4c5eaeb99e923d9c3c1cad749f018237ef87413e0"} Feb 27 19:37:16 crc kubenswrapper[4941]: I0227 19:37:16.167767 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7mf9h" podStartSLOduration=111.16774445 podStartE2EDuration="1m51.16774445s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:37:16.166279739 +0000 UTC m=+154.427420179" watchObservedRunningTime="2026-02-27 19:37:16.16774445 +0000 UTC m=+154.428884890" Feb 27 19:37:17 crc kubenswrapper[4941]: I0227 19:37:17.466415 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:17 crc kubenswrapper[4941]: I0227 19:37:17.466557 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:17 crc kubenswrapper[4941]: I0227 19:37:17.466582 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:17 crc kubenswrapper[4941]: E0227 19:37:17.466690 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:17 crc kubenswrapper[4941]: I0227 19:37:17.466453 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:17 crc kubenswrapper[4941]: E0227 19:37:17.466976 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:17 crc kubenswrapper[4941]: E0227 19:37:17.467001 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:17 crc kubenswrapper[4941]: E0227 19:37:17.467141 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:17 crc kubenswrapper[4941]: E0227 19:37:17.595985 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:19 crc kubenswrapper[4941]: I0227 19:37:19.466615 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:19 crc kubenswrapper[4941]: I0227 19:37:19.466653 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:19 crc kubenswrapper[4941]: E0227 19:37:19.466767 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:19 crc kubenswrapper[4941]: E0227 19:37:19.466945 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:19 crc kubenswrapper[4941]: I0227 19:37:19.467180 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:19 crc kubenswrapper[4941]: E0227 19:37:19.467248 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:19 crc kubenswrapper[4941]: I0227 19:37:19.467365 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:19 crc kubenswrapper[4941]: E0227 19:37:19.467436 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:21 crc kubenswrapper[4941]: I0227 19:37:21.466324 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:21 crc kubenswrapper[4941]: I0227 19:37:21.466361 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:21 crc kubenswrapper[4941]: I0227 19:37:21.466323 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:21 crc kubenswrapper[4941]: I0227 19:37:21.466356 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:21 crc kubenswrapper[4941]: E0227 19:37:21.466530 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:21 crc kubenswrapper[4941]: E0227 19:37:21.466897 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:21 crc kubenswrapper[4941]: E0227 19:37:21.467003 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:21 crc kubenswrapper[4941]: E0227 19:37:21.467084 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:22 crc kubenswrapper[4941]: E0227 19:37:22.596644 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:23 crc kubenswrapper[4941]: I0227 19:37:23.466342 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:23 crc kubenswrapper[4941]: I0227 19:37:23.466535 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:23 crc kubenswrapper[4941]: I0227 19:37:23.466609 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:23 crc kubenswrapper[4941]: E0227 19:37:23.466554 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:23 crc kubenswrapper[4941]: I0227 19:37:23.466725 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:23 crc kubenswrapper[4941]: E0227 19:37:23.467258 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:23 crc kubenswrapper[4941]: E0227 19:37:23.467324 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:23 crc kubenswrapper[4941]: E0227 19:37:23.467461 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:23 crc kubenswrapper[4941]: I0227 19:37:23.467754 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:37:23 crc kubenswrapper[4941]: E0227 19:37:23.468021 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:37:25 crc kubenswrapper[4941]: I0227 19:37:25.466587 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:25 crc kubenswrapper[4941]: I0227 19:37:25.466689 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:25 crc kubenswrapper[4941]: I0227 19:37:25.466690 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:25 crc kubenswrapper[4941]: E0227 19:37:25.466799 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:25 crc kubenswrapper[4941]: E0227 19:37:25.466916 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:25 crc kubenswrapper[4941]: E0227 19:37:25.466995 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:25 crc kubenswrapper[4941]: I0227 19:37:25.467015 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:25 crc kubenswrapper[4941]: E0227 19:37:25.467154 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:27 crc kubenswrapper[4941]: I0227 19:37:27.466924 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:27 crc kubenswrapper[4941]: I0227 19:37:27.466985 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:27 crc kubenswrapper[4941]: I0227 19:37:27.466998 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:27 crc kubenswrapper[4941]: E0227 19:37:27.467076 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:27 crc kubenswrapper[4941]: I0227 19:37:27.467155 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:27 crc kubenswrapper[4941]: E0227 19:37:27.467242 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:27 crc kubenswrapper[4941]: E0227 19:37:27.467267 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:27 crc kubenswrapper[4941]: E0227 19:37:27.467313 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:27 crc kubenswrapper[4941]: E0227 19:37:27.598367 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:29 crc kubenswrapper[4941]: I0227 19:37:29.466115 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:29 crc kubenswrapper[4941]: I0227 19:37:29.466157 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:29 crc kubenswrapper[4941]: I0227 19:37:29.466154 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:29 crc kubenswrapper[4941]: I0227 19:37:29.466116 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:29 crc kubenswrapper[4941]: E0227 19:37:29.466330 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:29 crc kubenswrapper[4941]: E0227 19:37:29.466453 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:29 crc kubenswrapper[4941]: E0227 19:37:29.466616 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:29 crc kubenswrapper[4941]: E0227 19:37:29.466830 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:31 crc kubenswrapper[4941]: I0227 19:37:31.466846 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:31 crc kubenswrapper[4941]: I0227 19:37:31.466962 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:31 crc kubenswrapper[4941]: I0227 19:37:31.467240 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:31 crc kubenswrapper[4941]: I0227 19:37:31.467254 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:31 crc kubenswrapper[4941]: E0227 19:37:31.467262 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:31 crc kubenswrapper[4941]: E0227 19:37:31.467370 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:31 crc kubenswrapper[4941]: E0227 19:37:31.467464 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:31 crc kubenswrapper[4941]: E0227 19:37:31.467771 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:32 crc kubenswrapper[4941]: E0227 19:37:32.598994 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.199802 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/1.log" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.200510 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/0.log" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.200562 4941 generic.go:334] "Generic (PLEG): container finished" podID="16d71936-7f0d-4add-a17b-400840d5fce2" containerID="98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf" exitCode=1 Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.200594 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerDied","Data":"98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf"} Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.200629 4941 scope.go:117] "RemoveContainer" containerID="672273373c4a8a9f7dfe0be0e32bc085f64d3383527075f2fc3d7bc5b01a4d65" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.200971 4941 scope.go:117] "RemoveContainer" containerID="98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf" Feb 27 19:37:33 crc kubenswrapper[4941]: E0227 19:37:33.201168 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lt4bk_openshift-multus(16d71936-7f0d-4add-a17b-400840d5fce2)\"" pod="openshift-multus/multus-lt4bk" podUID="16d71936-7f0d-4add-a17b-400840d5fce2" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.466441 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.466565 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:33 crc kubenswrapper[4941]: E0227 19:37:33.466930 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.466589 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:33 crc kubenswrapper[4941]: E0227 19:37:33.467005 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:33 crc kubenswrapper[4941]: E0227 19:37:33.467087 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:33 crc kubenswrapper[4941]: I0227 19:37:33.466589 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:33 crc kubenswrapper[4941]: E0227 19:37:33.467219 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:34 crc kubenswrapper[4941]: I0227 19:37:34.205373 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/1.log" Feb 27 19:37:35 crc kubenswrapper[4941]: I0227 19:37:35.466537 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:35 crc kubenswrapper[4941]: E0227 19:37:35.466671 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:35 crc kubenswrapper[4941]: I0227 19:37:35.466723 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:35 crc kubenswrapper[4941]: I0227 19:37:35.466768 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:35 crc kubenswrapper[4941]: I0227 19:37:35.466718 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:35 crc kubenswrapper[4941]: E0227 19:37:35.466866 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:35 crc kubenswrapper[4941]: E0227 19:37:35.466969 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:35 crc kubenswrapper[4941]: E0227 19:37:35.467033 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:36 crc kubenswrapper[4941]: I0227 19:37:36.467640 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:37:36 crc kubenswrapper[4941]: E0227 19:37:36.467841 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v74b7_openshift-ovn-kubernetes(bb476894-9c4f-487a-bfa6-5babb5243c0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" Feb 27 19:37:37 crc kubenswrapper[4941]: I0227 19:37:37.466128 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:37 crc kubenswrapper[4941]: I0227 19:37:37.466137 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:37 crc kubenswrapper[4941]: I0227 19:37:37.466186 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:37 crc kubenswrapper[4941]: I0227 19:37:37.466216 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:37 crc kubenswrapper[4941]: E0227 19:37:37.466340 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:37 crc kubenswrapper[4941]: E0227 19:37:37.466453 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:37 crc kubenswrapper[4941]: E0227 19:37:37.466567 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:37 crc kubenswrapper[4941]: E0227 19:37:37.466716 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:37 crc kubenswrapper[4941]: E0227 19:37:37.599792 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:39 crc kubenswrapper[4941]: I0227 19:37:39.466978 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:39 crc kubenswrapper[4941]: I0227 19:37:39.467016 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:39 crc kubenswrapper[4941]: E0227 19:37:39.467114 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:39 crc kubenswrapper[4941]: I0227 19:37:39.467272 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:39 crc kubenswrapper[4941]: I0227 19:37:39.467282 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:39 crc kubenswrapper[4941]: E0227 19:37:39.467388 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:39 crc kubenswrapper[4941]: E0227 19:37:39.467603 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:39 crc kubenswrapper[4941]: E0227 19:37:39.467706 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:41 crc kubenswrapper[4941]: I0227 19:37:41.466891 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:41 crc kubenswrapper[4941]: I0227 19:37:41.466931 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:41 crc kubenswrapper[4941]: I0227 19:37:41.466931 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:41 crc kubenswrapper[4941]: E0227 19:37:41.467047 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:41 crc kubenswrapper[4941]: I0227 19:37:41.467126 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:41 crc kubenswrapper[4941]: E0227 19:37:41.467230 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:41 crc kubenswrapper[4941]: E0227 19:37:41.467376 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:41 crc kubenswrapper[4941]: E0227 19:37:41.467552 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:42 crc kubenswrapper[4941]: E0227 19:37:42.601385 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:43 crc kubenswrapper[4941]: I0227 19:37:43.466666 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:43 crc kubenswrapper[4941]: E0227 19:37:43.466840 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:43 crc kubenswrapper[4941]: I0227 19:37:43.467128 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:43 crc kubenswrapper[4941]: E0227 19:37:43.467266 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:43 crc kubenswrapper[4941]: I0227 19:37:43.467696 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:43 crc kubenswrapper[4941]: I0227 19:37:43.467858 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:43 crc kubenswrapper[4941]: E0227 19:37:43.468118 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:43 crc kubenswrapper[4941]: E0227 19:37:43.468440 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:45 crc kubenswrapper[4941]: I0227 19:37:45.466092 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:45 crc kubenswrapper[4941]: I0227 19:37:45.466092 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:45 crc kubenswrapper[4941]: I0227 19:37:45.466400 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:45 crc kubenswrapper[4941]: E0227 19:37:45.466253 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:45 crc kubenswrapper[4941]: I0227 19:37:45.466115 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:45 crc kubenswrapper[4941]: E0227 19:37:45.466662 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:45 crc kubenswrapper[4941]: E0227 19:37:45.466453 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:45 crc kubenswrapper[4941]: E0227 19:37:45.466976 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:47 crc kubenswrapper[4941]: I0227 19:37:47.466064 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:47 crc kubenswrapper[4941]: I0227 19:37:47.466016 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:47 crc kubenswrapper[4941]: E0227 19:37:47.466205 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:47 crc kubenswrapper[4941]: I0227 19:37:47.466316 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:47 crc kubenswrapper[4941]: I0227 19:37:47.466350 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:47 crc kubenswrapper[4941]: E0227 19:37:47.466570 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:47 crc kubenswrapper[4941]: E0227 19:37:47.466691 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:47 crc kubenswrapper[4941]: E0227 19:37:47.466897 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:47 crc kubenswrapper[4941]: E0227 19:37:47.602872 4941 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 19:37:48 crc kubenswrapper[4941]: I0227 19:37:48.466754 4941 scope.go:117] "RemoveContainer" containerID="98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.259069 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/1.log" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.259405 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerStarted","Data":"47b17cd33fafc9994c48d1c40ce7cd487378811061d35a5d74b8b3e7104328dc"} Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.466844 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.466890 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.466906 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:49 crc kubenswrapper[4941]: E0227 19:37:49.466960 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.466849 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:49 crc kubenswrapper[4941]: E0227 19:37:49.467034 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:49 crc kubenswrapper[4941]: E0227 19:37:49.467697 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:49 crc kubenswrapper[4941]: E0227 19:37:49.467793 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:49 crc kubenswrapper[4941]: I0227 19:37:49.468267 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:37:50 crc kubenswrapper[4941]: I0227 19:37:50.266046 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/3.log" Feb 27 19:37:50 crc kubenswrapper[4941]: I0227 19:37:50.268760 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mvmp7"] Feb 27 19:37:50 crc kubenswrapper[4941]: I0227 19:37:50.270033 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:50 crc kubenswrapper[4941]: E0227 19:37:50.270245 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:50 crc kubenswrapper[4941]: I0227 19:37:50.270620 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerStarted","Data":"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a"} Feb 27 19:37:50 crc kubenswrapper[4941]: I0227 19:37:50.285182 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:37:51 crc kubenswrapper[4941]: I0227 19:37:51.466357 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:51 crc kubenswrapper[4941]: E0227 19:37:51.467379 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 19:37:51 crc kubenswrapper[4941]: I0227 19:37:51.466521 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:51 crc kubenswrapper[4941]: E0227 19:37:51.467484 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 19:37:51 crc kubenswrapper[4941]: I0227 19:37:51.466358 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:51 crc kubenswrapper[4941]: E0227 19:37:51.467540 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 19:37:52 crc kubenswrapper[4941]: I0227 19:37:52.466935 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:52 crc kubenswrapper[4941]: E0227 19:37:52.467424 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mvmp7" podUID="68a8b3ac-f7b7-412b-8c30-96c44ba947c9" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.466697 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.466759 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.466817 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.468917 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.469309 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.469321 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 19:37:53 crc kubenswrapper[4941]: I0227 19:37:53.469738 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 19:37:54 crc kubenswrapper[4941]: I0227 19:37:54.466779 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:37:54 crc kubenswrapper[4941]: I0227 19:37:54.469736 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 19:37:54 crc kubenswrapper[4941]: I0227 19:37:54.470867 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.599369 4941 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.639832 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podStartSLOduration=150.639806879 podStartE2EDuration="2m30.639806879s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 19:37:50.329409318 +0000 UTC m=+188.590549758" watchObservedRunningTime="2026-02-27 19:37:55.639806879 +0000 UTC m=+193.900947309" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.640340 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.640848 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.647274 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.647395 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.647530 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.647826 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.649967 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.650626 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.651734 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.652803 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.654452 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-68mk8"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.654869 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.655263 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.655388 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.665601 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.667782 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.668135 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.668502 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669156 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669341 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669546 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-w7qqt"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669693 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669728 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669846 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-w7qqt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669898 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.669934 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.670226 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.670267 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.673440 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.673922 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.673940 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674115 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674297 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674357 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674431 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674554 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674662 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674750 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.674667 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.680700 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.681249 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682015 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682164 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682199 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682273 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682463 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682494 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.682928 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.686026 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.686304 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.686348 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.686877 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687158 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687399 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687432 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687593 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687647 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.687690 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.691164 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.691960 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tm22"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.692981 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.693023 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.698084 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.698703 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.698960 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.699190 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.699590 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.699681 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.699821 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lx7mk"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.699944 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.700168 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.700327 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.700393 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.700494 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.709845 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.710202 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.711042 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.714240 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.718579 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.722512 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.722784 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.722645 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.723882 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.724254 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.722705 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.724392 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.724632 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725097 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-images\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725127 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-config\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725157 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-client\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725179 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725204 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725223 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnsb5\" (UniqueName: \"kubernetes.io/projected/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-kube-api-access-qnsb5\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725244 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-audit-policies\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725271 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknxx\" (UniqueName: \"kubernetes.io/projected/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-kube-api-access-gknxx\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725295 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb507a2-c21f-457c-9677-2afefb9a4d8c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725315 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/368c3d3a-9216-40ce-a0b7-17d490873b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725339 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725358 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m68jb\" (UniqueName: \"kubernetes.io/projected/297422d4-75d1-4b5e-a106-408b239e43c0-kube-api-access-m68jb\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725393 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mwd\" (UniqueName: \"kubernetes.io/projected/7ae7c46e-c974-471c-8f96-1dc0fd38e49d-kube-api-access-b4mwd\") pod \"downloads-7954f5f757-w7qqt\" (UID: \"7ae7c46e-c974-471c-8f96-1dc0fd38e49d\") " pod="openshift-console/downloads-7954f5f757-w7qqt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725414 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-serving-cert\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725433 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725490 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-client\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725513 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-encryption-config\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725535 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/368c3d3a-9216-40ce-a0b7-17d490873b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725568 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-config\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725597 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-serving-cert\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725616 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-service-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725636 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcszj\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-kube-api-access-dcszj\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725657 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725679 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb507a2-c21f-457c-9677-2afefb9a4d8c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725701 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c252274-a47f-4da8-b561-bbc47afaa507-audit-dir\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725722 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4mvx\" (UniqueName: \"kubernetes.io/projected/4fb507a2-c21f-457c-9677-2afefb9a4d8c-kube-api-access-h4mvx\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725745 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc287\" (UniqueName: \"kubernetes.io/projected/0c252274-a47f-4da8-b561-bbc47afaa507-kube-api-access-zc287\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.725765 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/297422d4-75d1-4b5e-a106-408b239e43c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.726190 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.726386 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.726893 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.727639 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.728102 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.729449 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.732142 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.732663 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.734522 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.736317 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.736500 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.736670 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.736755 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.738004 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.742345 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.742614 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.742842 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.743387 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.744067 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.744503 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kptkw"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.744961 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b9bp4"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.745335 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.745811 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.747235 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.747751 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.749076 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.749727 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.749934 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750270 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750433 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750464 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750280 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750054 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750610 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750142 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750168 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750674 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750804 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750936 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750960 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.750962 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.753124 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.756634 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.757575 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.763546 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.763835 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.765284 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.766433 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.767401 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.770776 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.772255 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.773661 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.776293 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.775236 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.777143 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.778351 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.778777 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.784437 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.792601 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.795870 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.796660 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.799259 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zln9d"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.800646 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.801000 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.802069 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fpgvv"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.802430 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.805294 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.808780 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.808952 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.809180 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.810269 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.811104 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.811744 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.812027 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.812589 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.812752 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.813091 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.813458 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.813971 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvkk5"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.814300 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.814838 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.815189 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.815766 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.816367 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.817667 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.818057 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.818598 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.818963 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.819579 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.820036 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.821938 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.822413 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.822678 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.822751 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.822804 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.823659 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.823762 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.824829 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-68mk8"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.825506 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-52mbw"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.825919 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52mbw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826555 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-config\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826581 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826600 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6514c538-c59e-4743-97a6-3c11d74fa12e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826618 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-config\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826635 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-default-certificate\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826651 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826673 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-serving-cert\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826687 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50000567-dad9-4bab-9db9-ecd69cf07609-config\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826702 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826718 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826734 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tf9v\" (UniqueName: \"kubernetes.io/projected/211c8e09-1aae-466b-8dba-daab4d60d3cd-kube-api-access-5tf9v\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826751 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-service-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826766 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcszj\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-kube-api-access-dcszj\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826791 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1bdd7236-3746-4764-8724-aab038391bea-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826806 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826821 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c252274-a47f-4da8-b561-bbc47afaa507-audit-dir\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826838 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826855 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb507a2-c21f-457c-9677-2afefb9a4d8c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826874 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4mvx\" (UniqueName: \"kubernetes.io/projected/4fb507a2-c21f-457c-9677-2afefb9a4d8c-kube-api-access-h4mvx\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826892 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-encryption-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826909 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826925 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-auth-proxy-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826940 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826955 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vx4\" (UniqueName: \"kubernetes.io/projected/51c910e0-3160-484d-9b5c-7e606f3a1a8d-kube-api-access-j2vx4\") pod \"migrator-59844c95c7-nb4c4\" (UID: \"51c910e0-3160-484d-9b5c-7e606f3a1a8d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826971 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/297422d4-75d1-4b5e-a106-408b239e43c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.826986 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-image-import-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827002 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc287\" (UniqueName: \"kubernetes.io/projected/0c252274-a47f-4da8-b561-bbc47afaa507-kube-api-access-zc287\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 
crc kubenswrapper[4941]: I0227 19:37:55.827022 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-images\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827037 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-stats-auth\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827052 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njcxn\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-kube-api-access-njcxn\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827067 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827082 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-config\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827096 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-client\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827111 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827129 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e912d24-8387-4a52-97ee-cfdb927b58cf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827152 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386aa95-c139-4357-8a92-c610b4b32709-serving-cert\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827166 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbc8\" 
(UniqueName: \"kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827172 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gndt2"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827183 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827200 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827216 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnsb5\" (UniqueName: \"kubernetes.io/projected/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-kube-api-access-qnsb5\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827235 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-audit-policies\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: 
\"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827252 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-machine-approver-tls\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827274 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827294 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e912d24-8387-4a52-97ee-cfdb927b58cf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827310 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdd7236-3746-4764-8724-aab038391bea-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827325 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827342 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827356 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit-dir\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827370 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvf9\" (UniqueName: \"kubernetes.io/projected/6514c538-c59e-4743-97a6-3c11d74fa12e-kube-api-access-jcvf9\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827388 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gknxx\" (UniqueName: \"kubernetes.io/projected/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-kube-api-access-gknxx\") pod 
\"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827406 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgljw\" (UniqueName: \"kubernetes.io/projected/0386aa95-c139-4357-8a92-c610b4b32709-kube-api-access-mgljw\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827422 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/368c3d3a-9216-40ce-a0b7-17d490873b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827437 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-client\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827501 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb507a2-c21f-457c-9677-2afefb9a4d8c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827521 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827537 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m68jb\" (UniqueName: \"kubernetes.io/projected/297422d4-75d1-4b5e-a106-408b239e43c0-kube-api-access-m68jb\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827552 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-metrics-certs\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827570 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mwd\" (UniqueName: \"kubernetes.io/projected/7ae7c46e-c974-471c-8f96-1dc0fd38e49d-kube-api-access-b4mwd\") pod \"downloads-7954f5f757-w7qqt\" (UID: \"7ae7c46e-c974-471c-8f96-1dc0fd38e49d\") " pod="openshift-console/downloads-7954f5f757-w7qqt" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827585 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827594 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827600 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827615 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827631 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827645 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9f4b\" (UniqueName: \"kubernetes.io/projected/07c961ab-73e0-4b79-9d5d-f181aa535bca-kube-api-access-p9f4b\") pod \"router-default-5444994796-lx7mk\" (UID: 
\"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827661 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-serving-cert\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827675 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827691 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827707 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827722 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827743 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50000567-dad9-4bab-9db9-ecd69cf07609-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827758 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6514c538-c59e-4743-97a6-3c11d74fa12e-serving-cert\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827775 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxtv\" (UniqueName: \"kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827789 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c961ab-73e0-4b79-9d5d-f181aa535bca-service-ca-bundle\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " 
pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827805 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827821 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e912d24-8387-4a52-97ee-cfdb927b58cf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827837 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-serving-cert\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827852 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827867 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-trusted-ca\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827881 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827895 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827909 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827925 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-encryption-config\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827941 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgdc\" (UniqueName: \"kubernetes.io/projected/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-kube-api-access-zjgdc\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827957 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827973 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-client\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827989 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/368c3d3a-9216-40ce-a0b7-17d490873b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.828006 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-kube-api-access-htxdc\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.828021 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50000567-dad9-4bab-9db9-ecd69cf07609-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.828035 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-node-pullsecrets\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.828568 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.828956 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-config\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc 
kubenswrapper[4941]: I0227 19:37:55.829074 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-audit-policies\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.829993 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-service-ca\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.830092 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c252274-a47f-4da8-b561-bbc47afaa507-audit-dir\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.831017 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-images\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.827553 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.831787 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"] Feb 27 19:37:55 crc 
kubenswrapper[4941]: I0227 19:37:55.831815 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.831838 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.831850 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.832214 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.833686 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/368c3d3a-9216-40ce-a0b7-17d490873b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.834640 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/297422d4-75d1-4b5e-a106-408b239e43c0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.836607 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-serving-cert\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.836904 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-etcd-client\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.837126 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.837193 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-config\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.837519 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c252274-a47f-4da8-b561-bbc47afaa507-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.837989 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb507a2-c21f-457c-9677-2afefb9a4d8c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.839066 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.840797 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb507a2-c21f-457c-9677-2afefb9a4d8c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.841296 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.842376 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.843521 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.843944 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.844102 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-etcd-client\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.844528 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-encryption-config\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.844769 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.845216 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c252274-a47f-4da8-b561-bbc47afaa507-serving-cert\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.846154 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.847441 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvkk5"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.849370 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zln9d"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.850370 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tm22"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.858888 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b9bp4"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.858951 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w7qqt"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.864206 4941 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.864785 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.865445 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/368c3d3a-9216-40ce-a0b7-17d490873b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.868797 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.870520 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.871641 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.872747 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kptkw"] Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.874243 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"] 
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.876125 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.877486 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fpgvv"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.879117 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-knckv"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.880162 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.881045 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w969m"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.882231 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.887345 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.890660 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.891273 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.893078 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-knckv"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.896457 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.897758 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.899022 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.900462 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52mbw"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.902034 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.902787 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.904251 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.905385 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.906851 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w969m"]
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.923159 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928581 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-stats-auth\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928624 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-webhook-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928641 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928659 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928674 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-key\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928699 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e912d24-8387-4a52-97ee-cfdb927b58cf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928715 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928732 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928898 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386aa95-c139-4357-8a92-c610b4b32709-serving-cert\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.928953 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929115 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/88a3699f-6925-4273-a865-2070e5c8cb98-tmpfs\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929144 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-proxy-tls\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929169 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929188 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit-dir\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929206 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvf9\" (UniqueName: \"kubernetes.io/projected/6514c538-c59e-4743-97a6-3c11d74fa12e-kube-api-access-jcvf9\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929222 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-console-config\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929244 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929262 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtpw9\" (UniqueName: \"kubernetes.io/projected/8116add2-08c1-41a0-8868-049cacc07ae0-kube-api-access-gtpw9\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929284 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-oauth-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929299 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-apiservice-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929318 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929338 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48nt\" (UniqueName: \"kubernetes.io/projected/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-kube-api-access-k48nt\") pod \"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929356 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwbh\" (UniqueName: \"kubernetes.io/projected/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-kube-api-access-vkwbh\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929374 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-srv-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929390 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmpw\" (UniqueName: \"kubernetes.io/projected/15767381-d283-4c54-8c38-19d68dec9371-kube-api-access-wlmpw\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929393 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929412 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929460 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929489 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929505 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47334960-9820-4b3a-be9d-e01a0e4a39ba-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929522 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6514c538-c59e-4743-97a6-3c11d74fa12e-serving-cert\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929536 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c961ab-73e0-4b79-9d5d-f181aa535bca-service-ca-bundle\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929551 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929560 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929568 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e912d24-8387-4a52-97ee-cfdb927b58cf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929583 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-serving-cert\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929602 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-trusted-ca\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929618 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929635 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929652 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8f6\" (UniqueName: \"kubernetes.io/projected/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-kube-api-access-gf8f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929671 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgdc\" (UniqueName: \"kubernetes.io/projected/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-kube-api-access-zjgdc\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929687 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-kube-api-access-htxdc\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929703 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50000567-dad9-4bab-9db9-ecd69cf07609-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929719 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929735 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbr75\" (UniqueName: \"kubernetes.io/projected/47334960-9820-4b3a-be9d-e01a0e4a39ba-kube-api-access-sbr75\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929751 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfgw\" (UniqueName: \"kubernetes.io/projected/88a3699f-6925-4273-a865-2070e5c8cb98-kube-api-access-2gfgw\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929767 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929783 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqjn\" (UniqueName: \"kubernetes.io/projected/d55e2208-e68a-461a-873a-cb8503a7dfd1-kube-api-access-ncqjn\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929798 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4x8\" (UniqueName: \"kubernetes.io/projected/a3878e2f-db0c-4078-9400-ff01ebfb02c6-kube-api-access-cv4x8\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929815 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929833 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929851 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tf9v\" (UniqueName: \"kubernetes.io/projected/211c8e09-1aae-466b-8dba-daab4d60d3cd-kube-api-access-5tf9v\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929866 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bdd7236-3746-4764-8724-aab038391bea-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929893 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-encryption-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929917 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929935 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-auth-proxy-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929950 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vx4\" (UniqueName: \"kubernetes.io/projected/51c910e0-3160-484d-9b5c-7e606f3a1a8d-kube-api-access-j2vx4\") pod \"migrator-59844c95c7-nb4c4\" (UID: \"51c910e0-3160-484d-9b5c-7e606f3a1a8d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929965 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3878e2f-db0c-4078-9400-ff01ebfb02c6-metrics-tls\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.929985 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-image-import-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930002 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpnhs\" (UniqueName: \"kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930020 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njcxn\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-kube-api-access-njcxn\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930036 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930052 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930060 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930078 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcq9q\" (UniqueName: \"kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930095 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnbc8\" (UniqueName: \"kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930101 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit-dir\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930111 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e912d24-8387-4a52-97ee-cfdb927b58cf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930126 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdd7236-3746-4764-8724-aab038391bea-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930141 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8d5m\" (UniqueName: \"kubernetes.io/projected/35c96e7e-52b3-46ed-a74a-f7a42a153a40-kube-api-access-n8d5m\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930165 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-machine-approver-tls\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930183 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930200 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a1fe7e-40d9-4b61-b28b-8e1714277767-config\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930218 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgljw\" (UniqueName: \"kubernetes.io/projected/0386aa95-c139-4357-8a92-c610b4b32709-kube-api-access-mgljw\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930238 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930257 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-client\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930273 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930292 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-metrics-certs\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930316 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930330 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-service-ca\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930345 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930358 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930379 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930395 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9f4b\" (UniqueName: \"kubernetes.io/projected/07c961ab-73e0-4b79-9d5d-f181aa535bca-kube-api-access-p9f4b\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930411 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930426 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930441 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-trusted-ca-bundle\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930458 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwff\" (UniqueName: \"kubernetes.io/projected/00649828-b271-4dfc-bcaa-e680c9c35a5a-kube-api-access-lrwff\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:55 crc kubenswrapper[4941]: I0227
19:37:55.930511 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a1fe7e-40d9-4b61-b28b-8e1714277767-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930526 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930545 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930560 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930576 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-oauth-config\") pod 
\"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.930600 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50000567-dad9-4bab-9db9-ecd69cf07609-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931181 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxtv\" (UniqueName: \"kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931216 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931245 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxtd8\" (UniqueName: \"kubernetes.io/projected/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-kube-api-access-fxtd8\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931460 4941 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931508 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931559 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47334960-9820-4b3a-be9d-e01a0e4a39ba-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931582 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5rt\" (UniqueName: \"kubernetes.io/projected/fb954dc5-4c54-4bec-9044-b36dc67e4920-kube-api-access-tt5rt\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931610 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931631 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931650 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96a1fe7e-40d9-4b61-b28b-8e1714277767-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931673 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-node-pullsecrets\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931706 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-config\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931727 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6514c538-c59e-4743-97a6-3c11d74fa12e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931745 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjjt\" (UniqueName: \"kubernetes.io/projected/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-kube-api-access-9xjjt\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931767 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-default-certificate\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931787 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-cabundle\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931816 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50000567-dad9-4bab-9db9-ecd69cf07609-config\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: 
\"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931835 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931853 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931872 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931881 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-audit\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931894 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931965 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931981 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.932041 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.932264 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c961ab-73e0-4b79-9d5d-f181aa535bca-service-ca-bundle\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " 
pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.932765 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-config\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.932832 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6514c538-c59e-4743-97a6-3c11d74fa12e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.932920 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-auth-proxy-config\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.931612 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-image-import-ca\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.933244 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/211c8e09-1aae-466b-8dba-daab4d60d3cd-node-pullsecrets\") pod \"apiserver-76f77b778f-9tm22\" (UID: 
\"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.933888 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.934124 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8e09-1aae-466b-8dba-daab4d60d3cd-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.934753 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.934801 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.935020 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50000567-dad9-4bab-9db9-ecd69cf07609-config\") pod 
\"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.935275 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.935459 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0386aa95-c139-4357-8a92-c610b4b32709-trusted-ca\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.935731 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386aa95-c139-4357-8a92-c610b4b32709-serving-cert\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.936224 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6514c538-c59e-4743-97a6-3c11d74fa12e-serving-cert\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.936281 4941 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.936434 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.936845 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-serving-cert\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.936949 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e912d24-8387-4a52-97ee-cfdb927b58cf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937097 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: 
I0227 19:37:55.937124 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937145 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-stats-auth\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937170 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937501 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937550 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937592 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-machine-approver-tls\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.937850 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50000567-dad9-4bab-9db9-ecd69cf07609-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.938318 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.938847 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e912d24-8387-4a52-97ee-cfdb927b58cf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.938957 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-etcd-client\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.939486 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-default-certificate\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.939505 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07c961ab-73e0-4b79-9d5d-f181aa535bca-metrics-certs\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.939861 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.940745 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.940757 4941 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.941085 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.941799 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.943487 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.949592 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.954362 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1bdd7236-3746-4764-8724-aab038391bea-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.954829 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/211c8e09-1aae-466b-8dba-daab4d60d3cd-encryption-config\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.963433 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.987530 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 19:37:55 crc kubenswrapper[4941]: I0227 19:37:55.995091 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdd7236-3746-4764-8724-aab038391bea-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.004290 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.022789 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033388 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033422 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3878e2f-db0c-4078-9400-ff01ebfb02c6-metrics-tls\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033456 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpnhs\" (UniqueName: \"kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033493 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033517 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcq9q\" (UniqueName: \"kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc 
kubenswrapper[4941]: I0227 19:37:56.033534 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8d5m\" (UniqueName: \"kubernetes.io/projected/35c96e7e-52b3-46ed-a74a-f7a42a153a40-kube-api-access-n8d5m\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033550 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a1fe7e-40d9-4b61-b28b-8e1714277767-config\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033593 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033614 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033657 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-service-ca\") pod 
\"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033673 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033688 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033726 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-trusted-ca-bundle\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033743 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwff\" (UniqueName: \"kubernetes.io/projected/00649828-b271-4dfc-bcaa-e680c9c35a5a-kube-api-access-lrwff\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033822 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96a1fe7e-40d9-4b61-b28b-8e1714277767-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033839 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033929 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-oauth-config\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033955 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxtd8\" (UniqueName: \"kubernetes.io/projected/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-kube-api-access-fxtd8\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.033985 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:56 crc kubenswrapper[4941]: 
I0227 19:37:56.034006 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96a1fe7e-40d9-4b61-b28b-8e1714277767-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034022 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47334960-9820-4b3a-be9d-e01a0e4a39ba-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034054 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5rt\" (UniqueName: \"kubernetes.io/projected/fb954dc5-4c54-4bec-9044-b36dc67e4920-kube-api-access-tt5rt\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034084 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-cabundle\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034114 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjjt\" (UniqueName: \"kubernetes.io/projected/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-kube-api-access-9xjjt\") pod \"service-ca-operator-777779d784-v9sth\" 
(UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034146 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034210 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034244 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-webhook-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034259 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034275 4941 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034289 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-key\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.034997 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.035083 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.035106 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/88a3699f-6925-4273-a865-2070e5c8cb98-tmpfs\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" 
Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.035937 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.035944 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/88a3699f-6925-4273-a865-2070e5c8cb98-tmpfs\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.035535 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-proxy-tls\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036040 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-console-config\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036068 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036106 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-oauth-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036131 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtpw9\" (UniqueName: \"kubernetes.io/projected/8116add2-08c1-41a0-8868-049cacc07ae0-kube-api-access-gtpw9\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036154 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-apiservice-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036176 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-srv-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036198 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48nt\" (UniqueName: \"kubernetes.io/projected/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-kube-api-access-k48nt\") pod 
\"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036291 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwbh\" (UniqueName: \"kubernetes.io/projected/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-kube-api-access-vkwbh\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036317 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmpw\" (UniqueName: \"kubernetes.io/projected/15767381-d283-4c54-8c38-19d68dec9371-kube-api-access-wlmpw\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036367 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036386 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036513 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036535 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47334960-9820-4b3a-be9d-e01a0e4a39ba-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036554 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036585 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8f6\" (UniqueName: \"kubernetes.io/projected/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-kube-api-access-gf8f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036614 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfgw\" (UniqueName: \"kubernetes.io/projected/88a3699f-6925-4273-a865-2070e5c8cb98-kube-api-access-2gfgw\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036630 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036648 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbr75\" (UniqueName: \"kubernetes.io/projected/47334960-9820-4b3a-be9d-e01a0e4a39ba-kube-api-access-sbr75\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036668 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqjn\" (UniqueName: \"kubernetes.io/projected/d55e2208-e68a-461a-873a-cb8503a7dfd1-kube-api-access-ncqjn\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.036685 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4x8\" (UniqueName: \"kubernetes.io/projected/a3878e2f-db0c-4078-9400-ff01ebfb02c6-kube-api-access-cv4x8\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.063439 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 19:37:56 crc 
kubenswrapper[4941]: I0227 19:37:56.083241 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.103435 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.123008 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.143145 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.168730 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.183355 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.204147 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.222867 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.242887 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.249914 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.269211 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.275071 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.292860 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.302797 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.304920 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.324683 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.343387 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 19:37:56 crc 
kubenswrapper[4941]: I0227 19:37:56.349271 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a1fe7e-40d9-4b61-b28b-8e1714277767-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.363896 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.374443 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a1fe7e-40d9-4b61-b28b-8e1714277767-config\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.383318 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.403290 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.423821 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.442881 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.463641 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: 
I0227 19:37:56.483707 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.503186 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.509073 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3878e2f-db0c-4078-9400-ff01ebfb02c6-metrics-tls\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.538198 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.542846 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.550701 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-oauth-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.564809 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.569402 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-serving-cert\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" 
Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.583713 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.587978 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15767381-d283-4c54-8c38-19d68dec9371-console-oauth-config\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.604017 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.623529 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.627641 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-console-config\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.643332 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.645540 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-service-ca\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.668273 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 
27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.675817 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15767381-d283-4c54-8c38-19d68dec9371-trusted-ca-bundle\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.683166 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.703218 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.708241 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47334960-9820-4b3a-be9d-e01a0e4a39ba-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.723883 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.727709 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47334960-9820-4b3a-be9d-e01a0e4a39ba-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.743008 4941 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.765015 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.784271 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.789976 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-srv-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.803791 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.821612 4941 request.go:700] Waited for 1.012206258s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.823162 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.845331 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.851279 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.852115 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d55e2208-e68a-461a-873a-cb8503a7dfd1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.852625 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.863848 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.883715 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.887157 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.903917 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.923747 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.930161 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-proxy-tls\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.943631 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.947952 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-webhook-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.950354 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88a3699f-6925-4273-a865-2070e5c8cb98-apiservice-cert\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.963336 4941 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.983607 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 19:37:56 crc kubenswrapper[4941]: I0227 19:37:56.985662 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-cabundle\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.004005 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.025100 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.032570 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb954dc5-4c54-4bec-9044-b36dc67e4920-signing-key\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034204 4941 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034303 4941 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034420 4941 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert podName:00649828-b271-4dfc-bcaa-e680c9c35a5a nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534383274 +0000 UTC m=+195.795523734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert") pod "catalog-operator-68c6474976-94d5q" (UID: "00649828-b271-4dfc-bcaa-e680c9c35a5a") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034462 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs podName:8b8a2380-9d06-4f11-9ce7-4ca7be32767e nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534446855 +0000 UTC m=+195.795587305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs") pod "multus-admission-controller-857f4d67dd-z7cwj" (UID: "8b8a2380-9d06-4f11-9ce7-4ca7be32767e") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034280 4941 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034579 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert podName:53dffcc9-85c5-4742-98f9-4ffb32ad20f6 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534565139 +0000 UTC m=+195.795705589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert") pod "service-ca-operator-777779d784-v9sth" (UID: "53dffcc9-85c5-4742-98f9-4ffb32ad20f6") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034625 4941 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034675 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls podName:b70f7996-0cfa-4eb2-896e-49fdaaf5c07a nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534662171 +0000 UTC m=+195.795802831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-52wdw" (UID: "b70f7996-0cfa-4eb2-896e-49fdaaf5c07a") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034711 4941 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034750 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls podName:35c96e7e-52b3-46ed-a74a-f7a42a153a40 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534739284 +0000 UTC m=+195.795879744 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls") pod "machine-config-operator-74547568cd-c9q44" (UID: "35c96e7e-52b3-46ed-a74a-f7a42a153a40") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034816 4941 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.034859 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images podName:35c96e7e-52b3-46ed-a74a-f7a42a153a40 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.534847167 +0000 UTC m=+195.795987627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images") pod "machine-config-operator-74547568cd-c9q44" (UID: "35c96e7e-52b3-46ed-a74a-f7a42a153a40") : failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037215 4941 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037296 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config podName:53dffcc9-85c5-4742-98f9-4ffb32ad20f6 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.537278325 +0000 UTC m=+195.798418785 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config") pod "service-ca-operator-777779d784-v9sth" (UID: "53dffcc9-85c5-4742-98f9-4ffb32ad20f6") : failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037231 4941 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037361 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume podName:533e82fe-cb1a-462c-a5cb-097cce12524e nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.537349217 +0000 UTC m=+195.798489667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume") pod "collect-profiles-29537010-nkxd5" (UID: "533e82fe-cb1a-462c-a5cb-097cce12524e") : failed to sync configmap cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037397 4941 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: E0227 19:37:57.037435 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert podName:8116add2-08c1-41a0-8868-049cacc07ae0 nodeName:}" failed. No retries permitted until 2026-02-27 19:37:57.537424929 +0000 UTC m=+195.798565379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert") pod "ingress-canary-52mbw" (UID: "8116add2-08c1-41a0-8868-049cacc07ae0") : failed to sync secret cache: timed out waiting for the condition Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.043776 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.064308 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.084316 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.104124 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.124158 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.144954 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.165043 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.185564 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.204040 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 19:37:57 crc kubenswrapper[4941]: 
I0227 19:37:57.223750 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.244124 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.264321 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.283764 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.304708 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.324212 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.343485 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.364885 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.384588 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.413951 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.423416 4941 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.445368 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.463942 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.484167 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.504857 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.524980 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.544691 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.565227 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.570462 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.570528 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.571623 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.571702 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.572361 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.572448 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.572435 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-config\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.572495 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.572871 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.573120 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.573185 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.574011 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35c96e7e-52b3-46ed-a74a-f7a42a153a40-images\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.576427 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35c96e7e-52b3-46ed-a74a-f7a42a153a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.576940 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.577098 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/00649828-b271-4dfc-bcaa-e680c9c35a5a-srv-cert\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.577706 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8116add2-08c1-41a0-8868-049cacc07ae0-cert\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.577961 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-serving-cert\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.578715 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.585068 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.620120 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnsb5\" (UniqueName: \"kubernetes.io/projected/9c1a6007-ff5e-4d9d-8584-0e9c4048978b-kube-api-access-qnsb5\") pod \"etcd-operator-b45778765-68mk8\" (UID: \"9c1a6007-ff5e-4d9d-8584-0e9c4048978b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.640746 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gknxx\" (UniqueName: \"kubernetes.io/projected/3db07a98-f6f1-4aa8-9ca9-1989dfc61f04-kube-api-access-gknxx\") pod \"machine-api-operator-5694c8668f-9s2rr\" (UID: \"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.664286 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcszj\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-kube-api-access-dcszj\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.677229 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4mvx\" (UniqueName: \"kubernetes.io/projected/4fb507a2-c21f-457c-9677-2afefb9a4d8c-kube-api-access-h4mvx\") pod \"openshift-apiserver-operator-796bbdcf4f-5krkp\" (UID: \"4fb507a2-c21f-457c-9677-2afefb9a4d8c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.697572 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/368c3d3a-9216-40ce-a0b7-17d490873b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m4whq\" (UID: \"368c3d3a-9216-40ce-a0b7-17d490873b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.716462 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mwd\" (UniqueName: \"kubernetes.io/projected/7ae7c46e-c974-471c-8f96-1dc0fd38e49d-kube-api-access-b4mwd\") pod \"downloads-7954f5f757-w7qqt\" (UID: \"7ae7c46e-c974-471c-8f96-1dc0fd38e49d\") " pod="openshift-console/downloads-7954f5f757-w7qqt"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.738734 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m68jb\" (UniqueName: \"kubernetes.io/projected/297422d4-75d1-4b5e-a106-408b239e43c0-kube-api-access-m68jb\") pod \"cluster-samples-operator-665b6dd947-s8m66\" (UID: \"297422d4-75d1-4b5e-a106-408b239e43c0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.756543 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc287\" (UniqueName: \"kubernetes.io/projected/0c252274-a47f-4da8-b561-bbc47afaa507-kube-api-access-zc287\") pod \"apiserver-7bbb656c7d-vrhrl\" (UID: \"0c252274-a47f-4da8-b561-bbc47afaa507\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.774338 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.784225 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.797125 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.804418 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.816689 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.822079 4941 request.go:700] Waited for 1.941564308s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.824601 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.846183 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.846918 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.865966 4941 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.868842 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.886054 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-w7qqt"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.886103 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.893588 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.921902 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e912d24-8387-4a52-97ee-cfdb927b58cf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-twcv6\" (UID: \"7e912d24-8387-4a52-97ee-cfdb927b58cf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.959202 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvf9\" (UniqueName: \"kubernetes.io/projected/6514c538-c59e-4743-97a6-3c11d74fa12e-kube-api-access-jcvf9\") pod \"openshift-config-operator-7777fb866f-xw8nw\" (UID: \"6514c538-c59e-4743-97a6-3c11d74fa12e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.960882 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50000567-dad9-4bab-9db9-ecd69cf07609-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2wswg\" (UID: \"50000567-dad9-4bab-9db9-ecd69cf07609\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"
Feb 27 19:37:57 crc kubenswrapper[4941]: I0227 19:37:57.982801 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htxdc\" (UniqueName: \"kubernetes.io/projected/a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa-kube-api-access-htxdc\") pod \"machine-approver-56656f9798-r8fcm\" (UID: \"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.000442 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.010138 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tf9v\" (UniqueName: \"kubernetes.io/projected/211c8e09-1aae-466b-8dba-daab4d60d3cd-kube-api-access-5tf9v\") pod \"apiserver-76f77b778f-9tm22\" (UID: \"211c8e09-1aae-466b-8dba-daab4d60d3cd\") " pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.014641 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.027216 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxtv\" (UniqueName: \"kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv\") pod \"controller-manager-879f6c89f-xdq5q\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.040338 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njcxn\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-kube-api-access-njcxn\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.058448 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjgdc\" (UniqueName: \"kubernetes.io/projected/ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210-kube-api-access-zjgdc\") pod \"openshift-controller-manager-operator-756b6f6bc6-bmrxx\" (UID: \"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.081363 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgljw\" (UniqueName: \"kubernetes.io/projected/0386aa95-c139-4357-8a92-c610b4b32709-kube-api-access-mgljw\") pod \"console-operator-58897d9998-kptkw\" (UID: \"0386aa95-c139-4357-8a92-c610b4b32709\") " pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.084121 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.095010 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.102966 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bdd7236-3746-4764-8724-aab038391bea-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll96c\" (UID: \"1bdd7236-3746-4764-8724-aab038391bea\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.116059 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.126659 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vx4\" (UniqueName: \"kubernetes.io/projected/51c910e0-3160-484d-9b5c-7e606f3a1a8d-kube-api-access-j2vx4\") pod \"migrator-59844c95c7-nb4c4\" (UID: \"51c910e0-3160-484d-9b5c-7e606f3a1a8d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.139935 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9f4b\" (UniqueName: \"kubernetes.io/projected/07c961ab-73e0-4b79-9d5d-f181aa535bca-kube-api-access-p9f4b\") pod \"router-default-5444994796-lx7mk\" (UID: \"07c961ab-73e0-4b79-9d5d-f181aa535bca\") " pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.165517 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.181186 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnbc8\" (UniqueName: \"kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8\") pod \"oauth-openshift-558db77b4-b9bp4\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.185066 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcq9q\" (UniqueName: \"kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q\") pod \"route-controller-manager-6576b87f9c-c877p\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.199301 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpnhs\" (UniqueName: \"kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs\") pod \"collect-profiles-29537010-nkxd5\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.221129 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8d5m\" (UniqueName: \"kubernetes.io/projected/35c96e7e-52b3-46ed-a74a-f7a42a153a40-kube-api-access-n8d5m\") pod \"machine-config-operator-74547568cd-c9q44\" (UID: \"35c96e7e-52b3-46ed-a74a-f7a42a153a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.225027 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.232211 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.241218 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9tm22"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.250671 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w7qqt"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.254621 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwff\" (UniqueName: \"kubernetes.io/projected/00649828-b271-4dfc-bcaa-e680c9c35a5a-kube-api-access-lrwff\") pod \"catalog-operator-68c6474976-94d5q\" (UID: \"00649828-b271-4dfc-bcaa-e680c9c35a5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.260502 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.264313 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxtd8\" (UniqueName: \"kubernetes.io/projected/be0c9fa2-2973-45e6-ad4a-4202d6b18a24-kube-api-access-fxtd8\") pod \"machine-config-controller-84d6567774-drm7j\" (UID: \"be0c9fa2-2973-45e6-ad4a-4202d6b18a24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.265285 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"
Feb 27 19:37:58 crc kubenswrapper[4941]: W0227 19:37:58.289027 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ae7c46e_c974_471c_8f96_1dc0fd38e49d.slice/crio-61b48d24482ce750717c0c9bb75f87c952d0e2364f84c96e9ad53dc64d2c3b1c WatchSource:0}: Error finding container 61b48d24482ce750717c0c9bb75f87c952d0e2364f84c96e9ad53dc64d2c3b1c: Status 404 returned error can't find the container with id 61b48d24482ce750717c0c9bb75f87c952d0e2364f84c96e9ad53dc64d2c3b1c
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.292962 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lx7mk"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.300400 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96a1fe7e-40d9-4b61-b28b-8e1714277767-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rvzxg\" (UID: \"96a1fe7e-40d9-4b61-b28b-8e1714277767\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.304912 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5rt\" (UniqueName: \"kubernetes.io/projected/fb954dc5-4c54-4bec-9044-b36dc67e4920-kube-api-access-tt5rt\") pod \"service-ca-9c57cc56f-kvkk5\" (UID: \"fb954dc5-4c54-4bec-9044-b36dc67e4920\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.305754 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.306250 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.332096 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.333018 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.334380 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjjt\" (UniqueName: \"kubernetes.io/projected/53dffcc9-85c5-4742-98f9-4ffb32ad20f6-kube-api-access-9xjjt\") pod \"service-ca-operator-777779d784-v9sth\" (UID: \"53dffcc9-85c5-4742-98f9-4ffb32ad20f6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.335422 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kptkw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.346359 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" event={"ID":"4fb507a2-c21f-457c-9677-2afefb9a4d8c","Type":"ContainerStarted","Data":"90a573f6651531b06a81e7055e89653ad5aa7ccb179699fc433a298ec65e71b1"}
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.350012 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w7qqt" event={"ID":"7ae7c46e-c974-471c-8f96-1dc0fd38e49d","Type":"ContainerStarted","Data":"61b48d24482ce750717c0c9bb75f87c952d0e2364f84c96e9ad53dc64d2c3b1c"}
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.350882 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" event={"ID":"368c3d3a-9216-40ce-a0b7-17d490873b85","Type":"ContainerStarted","Data":"2f59c7ffa37bc3f05ed15e404d2850718c012535f0cf07ee730e54daf194a555"}
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.352172 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtpw9\" (UniqueName: \"kubernetes.io/projected/8116add2-08c1-41a0-8868-049cacc07ae0-kube-api-access-gtpw9\") pod \"ingress-canary-52mbw\" (UID: \"8116add2-08c1-41a0-8868-049cacc07ae0\") " pod="openshift-ingress-canary/ingress-canary-52mbw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.360894 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" event={"ID":"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa","Type":"ContainerStarted","Data":"eff39a817dfe5a689b8c6bb3fa664cd7879a16b48e84b20dd950736126dc1623"}
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.364943 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwbh\" (UniqueName: \"kubernetes.io/projected/8b8a2380-9d06-4f11-9ce7-4ca7be32767e-kube-api-access-vkwbh\") pod \"multus-admission-controller-857f4d67dd-z7cwj\" (UID: \"8b8a2380-9d06-4f11-9ce7-4ca7be32767e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.389232 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmpw\" (UniqueName: \"kubernetes.io/projected/15767381-d283-4c54-8c38-19d68dec9371-kube-api-access-wlmpw\") pod \"console-f9d7485db-fpgvv\" (UID: \"15767381-d283-4c54-8c38-19d68dec9371\") " pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.399317 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.404572 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48nt\" (UniqueName: \"kubernetes.io/projected/fe89c478-f5be-4a2b-8a2c-2448dd1f778a-kube-api-access-k48nt\") pod \"package-server-manager-789f6589d5-qhw8d\" (UID: \"fe89c478-f5be-4a2b-8a2c-2448dd1f778a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.414718 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-68mk8"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.414914 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.416117 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.422954 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.425345 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.426310 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.432900 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8f6\" (UniqueName: \"kubernetes.io/projected/b70f7996-0cfa-4eb2-896e-49fdaaf5c07a-kube-api-access-gf8f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-52wdw\" (UID: \"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.441702 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gfgw\" (UniqueName: \"kubernetes.io/projected/88a3699f-6925-4273-a865-2070e5c8cb98-kube-api-access-2gfgw\") pod \"packageserver-d55dfcdfc-t6sr2\" (UID: \"88a3699f-6925-4273-a865-2070e5c8cb98\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.457746 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9s2rr"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.461309 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqjn\" (UniqueName: \"kubernetes.io/projected/d55e2208-e68a-461a-873a-cb8503a7dfd1-kube-api-access-ncqjn\") pod \"olm-operator-6b444d44fb-qv78l\" (UID: \"d55e2208-e68a-461a-873a-cb8503a7dfd1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.461842 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fpgvv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.479863 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.484680 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.487333 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbr75\" (UniqueName: \"kubernetes.io/projected/47334960-9820-4b3a-be9d-e01a0e4a39ba-kube-api-access-sbr75\") pod \"kube-storage-version-migrator-operator-b67b599dd-hphdl\" (UID: \"47334960-9820-4b3a-be9d-e01a0e4a39ba\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.492678 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.500535 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4x8\" (UniqueName: \"kubernetes.io/projected/a3878e2f-db0c-4078-9400-ff01ebfb02c6-kube-api-access-cv4x8\") pod \"dns-operator-744455d44c-zln9d\" (UID: \"a3878e2f-db0c-4078-9400-ff01ebfb02c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-zln9d"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.501954 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.515853 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.524850 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.535441 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.543419 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.543781 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.557341 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.584670 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52mbw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594830 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594866 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594884 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-config\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594901 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594916 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21bfb754-9e4f-424a-879f-bc9e3d7dd163-serving-cert\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594930 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvjz\" (UniqueName: \"kubernetes.io/projected/21bfb754-9e4f-424a-879f-bc9e3d7dd163-kube-api-access-cgvjz\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594949 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7ds\" (UniqueName: \"kubernetes.io/projected/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-kube-api-access-nc7ds\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.594970 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595001 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-node-bootstrap-token\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595026 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595042 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595057 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-certs\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595072 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbsl\" (UniqueName: \"kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595167 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595182 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-service-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595197 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnkqw\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595219 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595260 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: 
\"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.595278 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.597380 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.097369373 +0000 UTC m=+197.358509793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696058 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696184 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2zz\" (UniqueName: 
\"kubernetes.io/projected/aec473dd-8e2a-4af0-98fb-95e442141a92-kube-api-access-jp2zz\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696227 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-mountpoint-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696274 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9dk6\" (UniqueName: \"kubernetes.io/projected/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-kube-api-access-c9dk6\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696294 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696358 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696638 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696682 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696706 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-config\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.696781 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.196735229 +0000 UTC m=+197.457875689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696819 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696853 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-socket-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696896 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21bfb754-9e4f-424a-879f-bc9e3d7dd163-serving-cert\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.696923 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvjz\" (UniqueName: \"kubernetes.io/projected/21bfb754-9e4f-424a-879f-bc9e3d7dd163-kube-api-access-cgvjz\") pod 
\"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697002 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-metrics-tls\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697112 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7ds\" (UniqueName: \"kubernetes.io/projected/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-kube-api-access-nc7ds\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697190 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697245 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-node-bootstrap-token\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697364 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697393 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697418 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-certs\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697458 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbsl\" (UniqueName: \"kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697515 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-plugins-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697826 4941 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-config-volume\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.697973 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698026 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-service-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698051 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-csi-data-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698076 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnkqw\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698150 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-registration-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698193 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698531 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.698864 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.701571 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.701686 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-service-ca-bundle\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.702550 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.202537122 +0000 UTC m=+197.463677632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.705699 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.706126 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfb754-9e4f-424a-879f-bc9e3d7dd163-config\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.709139 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21bfb754-9e4f-424a-879f-bc9e3d7dd163-serving-cert\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.709609 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-certs\") pod \"machine-config-server-gndt2\" (UID: 
\"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.710034 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-node-bootstrap-token\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.710749 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.711970 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.714021 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.715915 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.739334 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.745421 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvjz\" (UniqueName: \"kubernetes.io/projected/21bfb754-9e4f-424a-879f-bc9e3d7dd163-kube-api-access-cgvjz\") pod \"authentication-operator-69f744f599-5pbpw\" (UID: \"21bfb754-9e4f-424a-879f-bc9e3d7dd163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.759088 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx"] Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.764002 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbsl\" (UniqueName: \"kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl\") pod \"marketplace-operator-79b997595-xzs75\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.768331 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.786258 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tm22"] Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.790904 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b9bp4"] Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.790936 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7ds\" (UniqueName: \"kubernetes.io/projected/54f9f137-2092-4b45-bbb6-4cceb6ef3cb5-kube-api-access-nc7ds\") pod \"machine-config-server-gndt2\" (UID: \"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5\") " pod="openshift-machine-config-operator/machine-config-server-gndt2" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.791857 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4"] Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806200 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.806395 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.306368224 +0000 UTC m=+197.567508644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806452 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-plugins-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806534 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-config-volume\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806571 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-csi-data-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806598 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-registration-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m" Feb 27 
19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806624 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806649 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2zz\" (UniqueName: \"kubernetes.io/projected/aec473dd-8e2a-4af0-98fb-95e442141a92-kube-api-access-jp2zz\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806669 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-mountpoint-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806690 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9dk6\" (UniqueName: \"kubernetes.io/projected/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-kube-api-access-c9dk6\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806722 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-csi-data-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.806768 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-plugins-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807291 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-config-volume\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807536 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-mountpoint-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807655 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807724 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-registration-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.807844 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.307828915 +0000 UTC m=+197.568969405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807875 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-socket-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.807911 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-metrics-tls\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.808242 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aec473dd-8e2a-4af0-98fb-95e442141a92-socket-dir\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.811629 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-metrics-tls\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.824387 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnkqw\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.872990 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.876657 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.878659 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp2zz\" (UniqueName: \"kubernetes.io/projected/aec473dd-8e2a-4af0-98fb-95e442141a92-kube-api-access-jp2zz\") pod \"csi-hostpathplugin-w969m\" (UID: \"aec473dd-8e2a-4af0-98fb-95e442141a92\") " pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.894285 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gndt2"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.895810 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9dk6\" (UniqueName: \"kubernetes.io/projected/e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca-kube-api-access-c9dk6\") pod \"dns-default-knckv\" (UID: \"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca\") " pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.901310 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-knckv"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.904993 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5"]
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.912753 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:58 crc kubenswrapper[4941]: E0227 19:37:58.913064 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.413047366 +0000 UTC m=+197.674187786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.923745 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w969m"
Feb 27 19:37:58 crc kubenswrapper[4941]: I0227 19:37:58.984206 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.006704 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.016806 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.017315 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.017800 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.517780042 +0000 UTC m=+197.778920462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: W0227 19:37:59.026706 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c910e0_3160_484d_9b5c_7e606f3a1a8d.slice/crio-c87bdced1fbc3f5bd6af117ab6c294fa9aedd51266d1d195f7cb22f1bc17c496 WatchSource:0}: Error finding container c87bdced1fbc3f5bd6af117ab6c294fa9aedd51266d1d195f7cb22f1bc17c496: Status 404 returned error can't find the container with id c87bdced1fbc3f5bd6af117ab6c294fa9aedd51266d1d195f7cb22f1bc17c496
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.026850 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kptkw"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.043438 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.120212 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.120830 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.6207911 +0000 UTC m=+197.881931520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: W0227 19:37:59.219919 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51e23993_d4b6_4929_b14c_c4a920b35760.slice/crio-e2db4964dff73a389170ce6517ab7e83c1ae77da15ebfec8228b3039d26ef7d4 WatchSource:0}: Error finding container e2db4964dff73a389170ce6517ab7e83c1ae77da15ebfec8228b3039d26ef7d4: Status 404 returned error can't find the container with id e2db4964dff73a389170ce6517ab7e83c1ae77da15ebfec8228b3039d26ef7d4
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.223096 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.223411 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.723400528 +0000 UTC m=+197.984540948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.327763 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.328081 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.828065373 +0000 UTC m=+198.089205793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.343252 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.401089 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kptkw" event={"ID":"0386aa95-c139-4357-8a92-c610b4b32709","Type":"ContainerStarted","Data":"e182fe735cde1ac87c01f073d15c20793d818a248afd484571fb681c41e4f880"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.406881 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" event={"ID":"297422d4-75d1-4b5e-a106-408b239e43c0","Type":"ContainerStarted","Data":"44aead89e7b7e5a32865309a0e12c7439f0db71eae3c3c9a779d3c048f0ea5fd"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.409358 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lx7mk" event={"ID":"07c961ab-73e0-4b79-9d5d-f181aa535bca","Type":"ContainerStarted","Data":"5195bce9c2ffcbb7bf3cc951a2222c23af4dd7462a8e785c5c8639bf8f91bb05"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.409435 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lx7mk" event={"ID":"07c961ab-73e0-4b79-9d5d-f181aa535bca","Type":"ContainerStarted","Data":"e8fd349b4bf874839e0fb07ca3289629ad6443be94474aea91dde40a68fb2676"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.411112 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" event={"ID":"815c7d43-e40d-4519-80ce-13df0e8d63ff","Type":"ContainerStarted","Data":"6891b73d889729455a2d5db4f07521093ccf290dbfae6b5f892f8b4ba3c13839"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.412422 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" event={"ID":"211c8e09-1aae-466b-8dba-daab4d60d3cd","Type":"ContainerStarted","Data":"3f114d13ca040c54997be678a3f333535affc4c1372d96d36725a6d994c55a5b"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.413773 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" event={"ID":"9c1a6007-ff5e-4d9d-8584-0e9c4048978b","Type":"ContainerStarted","Data":"350d2bfe59173f15302b2febfd19ba50eba1610d536f7488d2b871aaac0f19b6"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.416386 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" event={"ID":"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210","Type":"ContainerStarted","Data":"57dacbcced5d7a19af06e51ce8712153e3e371688ac53a4704a59f6fddcfb300"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.417817 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" event={"ID":"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa","Type":"ContainerStarted","Data":"88db337ebe9e838215da377f788dda813da48ffdb698c217abf8e7a3265c5d54"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.419477 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" event={"ID":"a380f8cf-f0a1-41d0-ac65-4f664f543f4d","Type":"ContainerStarted","Data":"c8d9041e18f0f15b1a143759f6fed0df929df0ffaf10d64dcbe88edc5b5a174b"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.420436 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" event={"ID":"0c252274-a47f-4da8-b561-bbc47afaa507","Type":"ContainerStarted","Data":"a35716ab8334071a8fb9ca8c0ffa11f3a4a232a3cb69763c070b5eb2dbeec7f0"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.421275 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" event={"ID":"96a1fe7e-40d9-4b61-b28b-8e1714277767","Type":"ContainerStarted","Data":"364827be87006f9a7c37110366d20da1c59df5f696b974e03fba52722fc2287f"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.422348 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" event={"ID":"6514c538-c59e-4743-97a6-3c11d74fa12e","Type":"ContainerStarted","Data":"ae811b16efec1d23a8bda3d5101883d2251d5eb33ddf6ce04e82c3b2be7c4715"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.423225 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" event={"ID":"368c3d3a-9216-40ce-a0b7-17d490873b85","Type":"ContainerStarted","Data":"ac6d62db85942e25bbf892669dab5d688dbcbdfd73a12f66f5b9a93fb444f1e2"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.428955 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" event={"ID":"35c96e7e-52b3-46ed-a74a-f7a42a153a40","Type":"ContainerStarted","Data":"40855446d92cf95938a23d9d8fb51453ad38b797a1043d071441b336f9e60b04"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.430413 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.430935 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:37:59.930916767 +0000 UTC m=+198.192057187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.432093 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" event={"ID":"4fb507a2-c21f-457c-9677-2afefb9a4d8c","Type":"ContainerStarted","Data":"168eb4fe524d3a512e7aae319bfbdc09530e448706f59b4f59d6f2fbe92f5371"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.448702 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.450370 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w7qqt" event={"ID":"7ae7c46e-c974-471c-8f96-1dc0fd38e49d","Type":"ContainerStarted","Data":"9f63192e446cdeb7130afa8c3a3ddfd1cd59e3d954193a326c969f7661675fa8"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.451308 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w7qqt"
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.452417 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fpgvv"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.453090 4941 patch_prober.go:28] interesting pod/downloads-7954f5f757-w7qqt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.453159 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w7qqt" podUID="7ae7c46e-c974-471c-8f96-1dc0fd38e49d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.466157 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" event={"ID":"533e82fe-cb1a-462c-a5cb-097cce12524e","Type":"ContainerStarted","Data":"640b263c2405712895c24a9734bc3d3a25a55fcc9caddc88b09ef29d05c6c5a6"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.471590 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" event={"ID":"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04","Type":"ContainerStarted","Data":"009b73a01eebce70f8fca482c4bdca8be89779caa4a09a531b5f053402e270c9"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.487012 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4" event={"ID":"51c910e0-3160-484d-9b5c-7e606f3a1a8d","Type":"ContainerStarted","Data":"c87bdced1fbc3f5bd6af117ab6c294fa9aedd51266d1d195f7cb22f1bc17c496"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.504884 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" event={"ID":"51e23993-d4b6-4929-b14c-c4a920b35760","Type":"ContainerStarted","Data":"e2db4964dff73a389170ce6517ab7e83c1ae77da15ebfec8228b3039d26ef7d4"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.506991 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" event={"ID":"50000567-dad9-4bab-9db9-ecd69cf07609","Type":"ContainerStarted","Data":"322dab92f74fe21b7930ea8e0ceb2e7e0be340a4511ea595c1986182373472e4"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.508095 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" event={"ID":"1bdd7236-3746-4764-8724-aab038391bea","Type":"ContainerStarted","Data":"b0770a6af1dc5c577766f3414750323d13fa3ed6e5ca33026b8205f304b3d74b"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.509398 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" event={"ID":"7e912d24-8387-4a52-97ee-cfdb927b58cf","Type":"ContainerStarted","Data":"ce7cde29e3796060fb9a1b62312d68647a3f8ef541ca82d513f749d90c6d98ce"}
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.531853 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.532267 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.032234797 +0000 UTC m=+198.293375217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.532623 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.533361 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.033340368 +0000 UTC m=+198.294480788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.634854 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.636064 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.136022078 +0000 UTC m=+198.397162548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: W0227 19:37:59.673259 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f9f137_2092_4b45_bbb6_4cceb6ef3cb5.slice/crio-dafe6f34be0a6c6f586a55091fb4bb7b9064a1bc42875d1fd5e1297a6a9f3836 WatchSource:0}: Error finding container dafe6f34be0a6c6f586a55091fb4bb7b9064a1bc42875d1fd5e1297a6a9f3836: Status 404 returned error can't find the container with id dafe6f34be0a6c6f586a55091fb4bb7b9064a1bc42875d1fd5e1297a6a9f3836
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.750594 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.751451 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.251430476 +0000 UTC m=+198.512570896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.870250 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.870879 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.370848928 +0000 UTC m=+198.631989348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.871452 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.883110 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.383083743 +0000 UTC m=+198.644224163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.940983 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52mbw"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.945245 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z7cwj"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.948077 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"]
Feb 27 19:37:59 crc kubenswrapper[4941]: I0227 19:37:59.980963 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:37:59 crc kubenswrapper[4941]: E0227 19:37:59.982237 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.482212612 +0000 UTC m=+198.743353052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.026513 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvkk5"]
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.028797 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7"
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.058926 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l"]
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.063398 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q"]
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.079019 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v9sth"]
Feb 27 19:38:00 crc kubenswrapper[4941]: W0227 19:38:00.079335 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb954dc5_4c54_4bec_9044_b36dc67e4920.slice/crio-a773a663e4873be4592260aedea13eaf03b267198452a17a6f7eb7ee8256f7f4 WatchSource:0}: Error finding container a773a663e4873be4592260aedea13eaf03b267198452a17a6f7eb7ee8256f7f4: Status 404 returned error can't find the container with id a773a663e4873be4592260aedea13eaf03b267198452a17a6f7eb7ee8256f7f4
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.081437 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw"]
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.083219 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.083933 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.583915003 +0000 UTC m=+198.845055423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.154704 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537018-k4vjk"]
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.155428 4941 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.159763 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-k4vjk"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.184891 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.185530 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.185663 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.685640895 +0000 UTC m=+198.946781315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.186081 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgk4\" (UniqueName: \"kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4\") pod \"auto-csr-approver-29537018-k4vjk\" (UID: \"0d98f658-1f8e-41f5-bc4e-2f442243e453\") " pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.186422 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.186944 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.686935322 +0000 UTC m=+198.948075742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.206164 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.290926 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.291880 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.791854984 +0000 UTC m=+199.052995404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.294408 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgk4\" (UniqueName: \"kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4\") pod \"auto-csr-approver-29537018-k4vjk\" (UID: \"0d98f658-1f8e-41f5-bc4e-2f442243e453\") " pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.295635 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w969m"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.298405 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.303309 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.803278757 +0000 UTC m=+199.064419177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.304177 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.306270 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5pbpw"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.308190 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-knckv"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.313145 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.313223 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.317848 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.328917 
4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-w7qqt" podStartSLOduration=155.32889817 podStartE2EDuration="2m35.32889817s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.324622579 +0000 UTC m=+198.585762999" watchObservedRunningTime="2026-02-27 19:38:00.32889817 +0000 UTC m=+198.590038600" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.346266 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zln9d"] Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.371463 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgk4\" (UniqueName: \"kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4\") pod \"auto-csr-approver-29537018-k4vjk\" (UID: \"0d98f658-1f8e-41f5-bc4e-2f442243e453\") " pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.389357 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"] Feb 27 19:38:00 crc kubenswrapper[4941]: W0227 19:38:00.393801 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21bfb754_9e4f_424a_879f_bc9e3d7dd163.slice/crio-9dfa55c9e427e3bb2f1bb83d6978e308f09e70cd0a58e435536b8e696fb92a8d WatchSource:0}: Error finding container 9dfa55c9e427e3bb2f1bb83d6978e308f09e70cd0a58e435536b8e696fb92a8d: Status 404 returned error can't find the container with id 9dfa55c9e427e3bb2f1bb83d6978e308f09e70cd0a58e435536b8e696fb92a8d Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.400595 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.401165 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:00.90114229 +0000 UTC m=+199.162282710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.469037 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" podStartSLOduration=155.469001706 podStartE2EDuration="2m35.469001706s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.41777777 +0000 UTC m=+198.678918200" watchObservedRunningTime="2026-02-27 19:38:00.469001706 +0000 UTC m=+198.730142126" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.477900 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.502760 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.503203 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.003184531 +0000 UTC m=+199.264324951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.515984 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lx7mk" podStartSLOduration=155.515942821 podStartE2EDuration="2m35.515942821s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.512207666 +0000 UTC m=+198.773348086" watchObservedRunningTime="2026-02-27 19:38:00.515942821 +0000 UTC m=+198.777083241" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.521169 
4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" event={"ID":"a4cbc91c-fdb8-4cbe-bdbe-ff43a4f8cffa","Type":"ContainerStarted","Data":"aaf0e38cbb9788ca95dfdb0c614e4ee8762f0ee1ef69f0b1705dd4d47e2f4058"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.522815 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" event={"ID":"a3878e2f-db0c-4078-9400-ff01ebfb02c6","Type":"ContainerStarted","Data":"7523e708893a2491fedc604cf3b5e0632244e9b7a7372bfb391ab2a7a861d4fa"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.523547 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" event={"ID":"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a","Type":"ContainerStarted","Data":"5ff0cf75399756f8b3c7ff4c9dc7aff3b2cfb60d5586421013887267410e4095"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.527281 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" event={"ID":"a380f8cf-f0a1-41d0-ac65-4f664f543f4d","Type":"ContainerStarted","Data":"b37fc5d9e1e7189d51d431eb4f64da98708aa9ce26e42ceb0634857f852dfc96"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.527644 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.528418 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" event={"ID":"00649828-b271-4dfc-bcaa-e680c9c35a5a","Type":"ContainerStarted","Data":"5ca10345caaea0c71221413528fea643068f55aca7f6c99395134c66f36b2f8c"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.529374 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" event={"ID":"be0c9fa2-2973-45e6-ad4a-4202d6b18a24","Type":"ContainerStarted","Data":"659760350a359af18ee1d7b48f3037686748ab023e429c0371d8e2c46897431d"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.530945 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" event={"ID":"88a3699f-6925-4273-a865-2070e5c8cb98","Type":"ContainerStarted","Data":"493c59fcd8a2117bf59d124d9fbc6e505eaa1261f2874631ed2e8d6d67ebf349"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.530987 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" event={"ID":"88a3699f-6925-4273-a865-2070e5c8cb98","Type":"ContainerStarted","Data":"082c82292cd6e16866375fba0a991caa03dce4272079528a5d1e7a71398d5fb9"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.532088 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knckv" event={"ID":"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca","Type":"ContainerStarted","Data":"11c5cec0f3ee05c3c1a359bf2045378bfb5856665af96d06d37f059599b0a479"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.532737 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" event={"ID":"fe89c478-f5be-4a2b-8a2c-2448dd1f778a","Type":"ContainerStarted","Data":"0ea26591d8828ad59ef559e02687cea1c9bc13e74159dfe3663047d3472c556b"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.532969 4941 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xdq5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 
19:38:00.533010 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.534655 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4" event={"ID":"51c910e0-3160-484d-9b5c-7e606f3a1a8d","Type":"ContainerStarted","Data":"a1eae4d205ba59144d2e0041b8acf1554c26b558a1d15d56fff4da4366dcfa87"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.536898 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5krkp" podStartSLOduration=155.536888363 podStartE2EDuration="2m35.536888363s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.535763951 +0000 UTC m=+198.796904361" watchObservedRunningTime="2026-02-27 19:38:00.536888363 +0000 UTC m=+198.798028783" Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.548599 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" event={"ID":"9c1a6007-ff5e-4d9d-8584-0e9c4048978b","Type":"ContainerStarted","Data":"e431e3ad936422b1c9e0515a84967d347c4be45c52bdfc1ac9e9b992867ae033"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.552854 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" event={"ID":"1bdd7236-3746-4764-8724-aab038391bea","Type":"ContainerStarted","Data":"5d36b17ca964b77979a162d4f9d35ece2f915550b167324737e400c8fa160854"} Feb 27 19:38:00 crc 
kubenswrapper[4941]: I0227 19:38:00.567517 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" event={"ID":"7e912d24-8387-4a52-97ee-cfdb927b58cf","Type":"ContainerStarted","Data":"9537c4e91bdeac1271ae01716373e1ac79b166249e4eb7ebcb1ded81e4ef705a"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.569608 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" event={"ID":"21bfb754-9e4f-424a-879f-bc9e3d7dd163","Type":"ContainerStarted","Data":"9dfa55c9e427e3bb2f1bb83d6978e308f09e70cd0a58e435536b8e696fb92a8d"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.575601 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" event={"ID":"533e82fe-cb1a-462c-a5cb-097cce12524e","Type":"ContainerStarted","Data":"4d40705c686c2a889acbb635772a275f8fbdf6748f0d5555114fa4f8639332ed"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.579666 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" event={"ID":"297422d4-75d1-4b5e-a106-408b239e43c0","Type":"ContainerStarted","Data":"2cd94326eea6378facdc4f689baef2cc5fd210be79115157dd38d914e7788a00"} Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.580176 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m4whq" podStartSLOduration=155.580155614 podStartE2EDuration="2m35.580155614s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.576770339 +0000 UTC m=+198.837910759" watchObservedRunningTime="2026-02-27 19:38:00.580155614 +0000 UTC m=+198.841296034" 
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.581970 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" event={"ID":"47334960-9820-4b3a-be9d-e01a0e4a39ba","Type":"ContainerStarted","Data":"576fc69a92031b815c68eddc15eb23b83d3c775bdfcdcdd8fa7a2407a2e37b52"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.583263 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" event={"ID":"97e2661c-8124-4c95-a2c4-deb0e07cb14f","Type":"ContainerStarted","Data":"f9c88c3fe67fa5782e1dd9721d1ba348be0f9f97974009b0e5327384ad52b1a7"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.598108 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" event={"ID":"fb954dc5-4c54-4bec-9044-b36dc67e4920","Type":"ContainerStarted","Data":"a773a663e4873be4592260aedea13eaf03b267198452a17a6f7eb7ee8256f7f4"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.602818 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" event={"ID":"ef2e0cc1-6a4b-4eca-81ed-fbb42c3f7210","Type":"ContainerStarted","Data":"3a0b4346170213c98ab10a6c1ef2a5b0cf5f0e7b8d3a4165361245e8c7fa4f7c"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.603290 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.603538 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.103514594 +0000 UTC m=+199.364655054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.603765 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.604939 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.104931094 +0000 UTC m=+199.366071514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.611514 4941 generic.go:334] "Generic (PLEG): container finished" podID="0c252274-a47f-4da8-b561-bbc47afaa507" containerID="fd4f4084818c6c2990874a5329b2bd3b6f9c50453b20c6241b5bfd65b446368c" exitCode=0
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.612460 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" event={"ID":"0c252274-a47f-4da8-b561-bbc47afaa507","Type":"ContainerDied","Data":"fd4f4084818c6c2990874a5329b2bd3b6f9c50453b20c6241b5bfd65b446368c"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.616541 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2wswg" event={"ID":"50000567-dad9-4bab-9db9-ecd69cf07609","Type":"ContainerStarted","Data":"52d1433b3adca8ed569081dae19e0f24f793302d63ac27df3bc293c955bfee57"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.622009 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" podStartSLOduration=155.621989255 podStartE2EDuration="2m35.621989255s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.612264231 +0000 UTC m=+198.873404661" watchObservedRunningTime="2026-02-27 19:38:00.621989255 +0000 UTC m=+198.883129675"
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.627944 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" event={"ID":"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04","Type":"ContainerStarted","Data":"ca1ad180adb193d4d993615769b4c42c00b5403583e5739f5d6240668fd8f10e"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.630209 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gndt2" event={"ID":"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5","Type":"ContainerStarted","Data":"dafe6f34be0a6c6f586a55091fb4bb7b9064a1bc42875d1fd5e1297a6a9f3836"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.637509 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52mbw" event={"ID":"8116add2-08c1-41a0-8868-049cacc07ae0","Type":"ContainerStarted","Data":"84e49cf33093938edc119c026d8f07f6c97cc7121030caf179d8f7e8b9c9d051"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.638295 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w969m" event={"ID":"aec473dd-8e2a-4af0-98fb-95e442141a92","Type":"ContainerStarted","Data":"44e6d361aed9b45df11438b709c2e6a13d0b42683b8ea9f2776196f43f9d97f3"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.639642 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" event={"ID":"53dffcc9-85c5-4742-98f9-4ffb32ad20f6","Type":"ContainerStarted","Data":"96ff911425f8c82420e513a585e3173504943ee49b9d4577f0f2c66a15233355"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.640702 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" event={"ID":"d55e2208-e68a-461a-873a-cb8503a7dfd1","Type":"ContainerStarted","Data":"9c07436495968be7f312d207792afd20c14c26bda2c62a6ccef30d4e263476b2"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.642876 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" event={"ID":"51e23993-d4b6-4929-b14c-c4a920b35760","Type":"ContainerStarted","Data":"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.643645 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" event={"ID":"8b8a2380-9d06-4f11-9ce7-4ca7be32767e","Type":"ContainerStarted","Data":"845349e871ce7eefe0ea72e5341a0de7f43aa4affc29c10de12c818242e7d69a"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.644531 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fpgvv" event={"ID":"15767381-d283-4c54-8c38-19d68dec9371","Type":"ContainerStarted","Data":"ddc826f0114f610cd6b34c5de4fb24fe073acb02e857a5c926d86c74ad258f8d"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.646254 4941 generic.go:334] "Generic (PLEG): container finished" podID="6514c538-c59e-4743-97a6-3c11d74fa12e" containerID="e8d40b7e846de86ae15abb4ca1d97f2e45bbe555663928e09bfd031da9e7666e" exitCode=0
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.646362 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" event={"ID":"6514c538-c59e-4743-97a6-3c11d74fa12e","Type":"ContainerDied","Data":"e8d40b7e846de86ae15abb4ca1d97f2e45bbe555663928e09bfd031da9e7666e"}
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.651517 4941 patch_prober.go:28] interesting pod/downloads-7954f5f757-w7qqt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.651586 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w7qqt" podUID="7ae7c46e-c974-471c-8f96-1dc0fd38e49d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.657281 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-68mk8" podStartSLOduration=155.657251921 podStartE2EDuration="2m35.657251921s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:00.656747687 +0000 UTC m=+198.917888107" watchObservedRunningTime="2026-02-27 19:38:00.657251921 +0000 UTC m=+198.918392331"
Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.704982 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.705206 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.205183474 +0000 UTC m=+199.466323894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.705370 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.706198 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.206187303 +0000 UTC m=+199.467327803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.806264 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.807019 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.307001639 +0000 UTC m=+199.568142059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.807365 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.807890 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.307880974 +0000 UTC m=+199.569021394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.909521 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.909927 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.409901914 +0000 UTC m=+199.671042334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:00 crc kubenswrapper[4941]: I0227 19:38:00.910762 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:00 crc kubenswrapper[4941]: E0227 19:38:00.911082 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.411073887 +0000 UTC m=+199.672214307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.003461 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-k4vjk"] Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.013386 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.013671 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.513649413 +0000 UTC m=+199.774789843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.013739 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.014091 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.514079356 +0000 UTC m=+199.775219786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.114583 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.114705 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.614672066 +0000 UTC m=+199.875812486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.114863 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.115415 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.615391446 +0000 UTC m=+199.876531926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.216505 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.217033 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.716948983 +0000 UTC m=+199.978089443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.303492 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.303578 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.318629 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.319383 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.819361165 +0000 UTC m=+200.080501605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.420293 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.420603 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.920567232 +0000 UTC m=+200.181707682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.421022 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.421566 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:01.92154507 +0000 UTC m=+200.182685530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.522549 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.522735 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.022698846 +0000 UTC m=+200.283839286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.522914 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.523354 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.023338364 +0000 UTC m=+200.284478804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.623725 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.624308 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.124289564 +0000 UTC m=+200.385429994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: W0227 19:38:01.658817 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d98f658_1f8e_41f5_bc4e_2f442243e453.slice/crio-40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559 WatchSource:0}: Error finding container 40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559: Status 404 returned error can't find the container with id 40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559 Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.718704 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52mbw" event={"ID":"8116add2-08c1-41a0-8868-049cacc07ae0","Type":"ContainerStarted","Data":"e0406de40f7846d7328578482f83ed138c1f4794dbea249af9fccb6a137b9e6b"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.726155 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.726421 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.226407387 +0000 UTC m=+200.487547807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.746991 4941 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.751077 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" event={"ID":"00649828-b271-4dfc-bcaa-e680c9c35a5a","Type":"ContainerStarted","Data":"ffd8f7719246e00516e272569216a19710a5701bf38726c58badbf4edb61d5e8"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.756883 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" event={"ID":"35c96e7e-52b3-46ed-a74a-f7a42a153a40","Type":"ContainerStarted","Data":"05f21f6c85fc796d2a843d3c6c038ac40ae13b86f16f50e51865212fd0489791"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.772209 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" event={"ID":"d55e2208-e68a-461a-873a-cb8503a7dfd1","Type":"ContainerStarted","Data":"b54144795acf7c1be28eed8e9e4b230e40715cdee9225a3c86fa43bd579e4c81"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.777337 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fpgvv" 
event={"ID":"15767381-d283-4c54-8c38-19d68dec9371","Type":"ContainerStarted","Data":"0ce92d32d149fd387a484ca516c6070c556356b0bcbd895c0f47f4728e02d8ef"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.781911 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" event={"ID":"b70f7996-0cfa-4eb2-896e-49fdaaf5c07a","Type":"ContainerStarted","Data":"37cb54691ecad749dc92594c14a13360bafe589b8a5da7dfbcaee115941af045"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.787290 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gndt2" event={"ID":"54f9f137-2092-4b45-bbb6-4cceb6ef3cb5","Type":"ContainerStarted","Data":"131a8d14b9951d49c492480fd033053e17066548aac86460ef08b4f8a12545fe"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.830937 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" event={"ID":"fe89c478-f5be-4a2b-8a2c-2448dd1f778a","Type":"ContainerStarted","Data":"157a3c6a41ad2348f8d69770245575141e7a9847d37e07677121c4b2723e072d"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.833016 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.833342 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.333325626 +0000 UTC m=+200.594466036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.876339 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" event={"ID":"815c7d43-e40d-4519-80ce-13df0e8d63ff","Type":"ContainerStarted","Data":"7ce3bae6f073409e2d8043eccf85bde055d6cd9d1856902d73db8c9d796a3b5f"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.901648 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kptkw" event={"ID":"0386aa95-c139-4357-8a92-c610b4b32709","Type":"ContainerStarted","Data":"1979b7d411696691c08fe0c14a3cb4d548ba028b189740e03a709ad930ab7092"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.902577 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.907119 4941 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xdq5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.907164 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.907209 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" event={"ID":"fb954dc5-4c54-4bec-9044-b36dc67e4920","Type":"ContainerStarted","Data":"2b1be9bd25f15c5845a6e628a8611251d23cdf3ffd8ff0167a6d6278a33499bd"} Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.907336 4941 patch_prober.go:28] interesting pod/console-operator-58897d9998-kptkw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.907404 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kptkw" podUID="0386aa95-c139-4357-8a92-c610b4b32709" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.908658 4941 patch_prober.go:28] interesting pod/downloads-7954f5f757-w7qqt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.908689 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w7qqt" podUID="7ae7c46e-c974-471c-8f96-1dc0fd38e49d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.934879 4941 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:01 crc kubenswrapper[4941]: E0227 19:38:01.945153 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.445132723 +0000 UTC m=+200.706273143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.961686 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r8fcm" podStartSLOduration=156.96167086 podStartE2EDuration="2m36.96167086s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:01.960760424 +0000 UTC m=+200.221900844" watchObservedRunningTime="2026-02-27 19:38:01.96167086 +0000 UTC m=+200.222811280" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.962245 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kptkw" podStartSLOduration=156.962241646 
podStartE2EDuration="2m36.962241646s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:01.936456518 +0000 UTC m=+200.197596938" watchObservedRunningTime="2026-02-27 19:38:01.962241646 +0000 UTC m=+200.223382066" Feb 27 19:38:01 crc kubenswrapper[4941]: I0227 19:38:01.999409 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" podStartSLOduration=156.999391025 podStartE2EDuration="2m36.999391025s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:01.995844245 +0000 UTC m=+200.256984665" watchObservedRunningTime="2026-02-27 19:38:01.999391025 +0000 UTC m=+200.260531445" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.045990 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.046251 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.546231097 +0000 UTC m=+200.807371517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.046288 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.046666 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.546650219 +0000 UTC m=+200.807790639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.059578 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" podStartSLOduration=157.059559054 podStartE2EDuration="2m37.059559054s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.056119167 +0000 UTC m=+200.317259587" watchObservedRunningTime="2026-02-27 19:38:02.059559054 +0000 UTC m=+200.320699474" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.076255 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kvkk5" podStartSLOduration=156.076238245 podStartE2EDuration="2m36.076238245s" podCreationTimestamp="2026-02-27 19:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.075170144 +0000 UTC m=+200.336310564" watchObservedRunningTime="2026-02-27 19:38:02.076238245 +0000 UTC m=+200.337378655" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.123159 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bmrxx" podStartSLOduration=157.123141989 podStartE2EDuration="2m37.123141989s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.122978304 +0000 UTC m=+200.384118724" watchObservedRunningTime="2026-02-27 19:38:02.123141989 +0000 UTC m=+200.384282409" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.124351 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" podStartSLOduration=156.124340773 podStartE2EDuration="2m36.124340773s" podCreationTimestamp="2026-02-27 19:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.095598461 +0000 UTC m=+200.356738881" watchObservedRunningTime="2026-02-27 19:38:02.124340773 +0000 UTC m=+200.385481193" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.146279 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-twcv6" podStartSLOduration=157.146261492 podStartE2EDuration="2m37.146261492s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.143227526 +0000 UTC m=+200.404367956" watchObservedRunningTime="2026-02-27 19:38:02.146261492 +0000 UTC m=+200.407401912" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.147563 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.147924 4941 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.647907068 +0000 UTC m=+200.909047488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.248692 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.249195 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.749183578 +0000 UTC m=+201.010323998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.308015 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:02 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:02 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:02 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.308488 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.358160 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.358415 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 19:38:02.85837317 +0000 UTC m=+201.119513600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.358795 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.359190 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.859179053 +0000 UTC m=+201.120319473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.461174 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.461571 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.961550744 +0000 UTC m=+201.222691164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.461815 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.462254 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:02.962243443 +0000 UTC m=+201.223383863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.564327 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.564799 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.064780317 +0000 UTC m=+201.325920737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.665887 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.666277 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.166263022 +0000 UTC m=+201.427403442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.767507 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.767908 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.267889892 +0000 UTC m=+201.529030312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.869590 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.869934 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.369917592 +0000 UTC m=+201.631058012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.916868 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" event={"ID":"6514c538-c59e-4743-97a6-3c11d74fa12e","Type":"ContainerStarted","Data":"e9545adf1821653825cc1db4705b3abad0de74d0b06669d5c353841b670c53b2"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.916939 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.918673 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" event={"ID":"fe89c478-f5be-4a2b-8a2c-2448dd1f778a","Type":"ContainerStarted","Data":"0b03709742c71462ce424d85aa6c9ee14553e08985f1a0373b85322f1653e175"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.918788 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.920795 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" event={"ID":"21bfb754-9e4f-424a-879f-bc9e3d7dd163","Type":"ContainerStarted","Data":"b816dc5f33fb1fafc26726396ed35c3f09f1194194686af8d315fb49c1b5dd9b"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.924332 4941 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" event={"ID":"0d98f658-1f8e-41f5-bc4e-2f442243e453","Type":"ContainerStarted","Data":"40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.926741 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4" event={"ID":"51c910e0-3160-484d-9b5c-7e606f3a1a8d","Type":"ContainerStarted","Data":"c101c1dc4f4a503dd6fc9ce83a99f31b3eac9e9c7bfada97c4e4a4ad66d33da5"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.928678 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" event={"ID":"0c252274-a47f-4da8-b561-bbc47afaa507","Type":"ContainerStarted","Data":"2b73b8594ccc4d36e0ef52f23e746157f57402f14ac59a8dd207f8cfcc5a378d"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.943790 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" event={"ID":"35c96e7e-52b3-46ed-a74a-f7a42a153a40","Type":"ContainerStarted","Data":"abe84a3e5414dd97c53a8b91402bc5d4356ca16db0e19471fa6a48b09f96bbf7"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.947918 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" event={"ID":"a3878e2f-db0c-4078-9400-ff01ebfb02c6","Type":"ContainerStarted","Data":"79cab8e952765df339f76b9394d83416fb8aee13d9e7182eea8605aa0fa6e5d0"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.947982 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" event={"ID":"a3878e2f-db0c-4078-9400-ff01ebfb02c6","Type":"ContainerStarted","Data":"35f2f03e57ecbdaeec564f0f491632d30288c909e4e199223cbb5c44971e0e3d"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.955629 
4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" event={"ID":"8b8a2380-9d06-4f11-9ce7-4ca7be32767e","Type":"ContainerStarted","Data":"98797ac526f496e6190bf7fd8f0ad2b8a9e7aa8b7696262fea28722cf69917d1"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.955687 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" event={"ID":"8b8a2380-9d06-4f11-9ce7-4ca7be32767e","Type":"ContainerStarted","Data":"9d7209d285ffecd174b751a12427f57537bdeaf90962a754df4c8f36f631ccf0"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.968602 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" event={"ID":"47334960-9820-4b3a-be9d-e01a0e4a39ba","Type":"ContainerStarted","Data":"61dfeb025edee1e01ea9e54325184c6cbb1618b9ea0b5bdb58d7cae1bd32bfc1"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.970659 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" event={"ID":"97e2661c-8124-4c95-a2c4-deb0e07cb14f","Type":"ContainerStarted","Data":"ddb6c404d693d64f0dbfd5d5a7e00a17e0986cde003c5cf9608f40f9bb75264c"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.971540 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.972134 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:02 crc kubenswrapper[4941]: E0227 19:38:02.973341 4941 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.473324512 +0000 UTC m=+201.734464922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.975279 4941 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzs75 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.975320 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.976052 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" event={"ID":"3db07a98-f6f1-4aa8-9ca9-1989dfc61f04","Type":"ContainerStarted","Data":"f6c415ec8648f3fd8521184037410c7f33129c00c905f9a44ca1d55836316992"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.981615 4941 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" podStartSLOduration=157.981593525 podStartE2EDuration="2m37.981593525s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.953976586 +0000 UTC m=+201.215117016" watchObservedRunningTime="2026-02-27 19:38:02.981593525 +0000 UTC m=+201.242733945" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.984238 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nb4c4" podStartSLOduration=157.98421951 podStartE2EDuration="2m37.98421951s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:02.980995739 +0000 UTC m=+201.242136159" watchObservedRunningTime="2026-02-27 19:38:02.98421951 +0000 UTC m=+201.245359930" Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.993726 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knckv" event={"ID":"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca","Type":"ContainerStarted","Data":"fae0134e238d2e8e3e89fe8460c8843f26c09de42cbc22f4c992ae8c6763f74f"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.993771 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knckv" event={"ID":"e37fd7c1-73ee-445d-9b1c-9d3ed03fe2ca","Type":"ContainerStarted","Data":"bf2938154d2057645094217062d8db61a1e160c4b8f20415ec8b4283d5a74317"} Feb 27 19:38:02 crc kubenswrapper[4941]: I0227 19:38:02.994324 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-knckv" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.002939 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d" podStartSLOduration=158.002921098 podStartE2EDuration="2m38.002921098s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.001797936 +0000 UTC m=+201.262938356" watchObservedRunningTime="2026-02-27 19:38:03.002921098 +0000 UTC m=+201.264061508" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.006054 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" event={"ID":"297422d4-75d1-4b5e-a106-408b239e43c0","Type":"ContainerStarted","Data":"11da428b0b1d9dc7d8ede5ca0e1ee152bce34e748e6b06c1d88cb78c9517b1d0"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.009871 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" event={"ID":"be0c9fa2-2973-45e6-ad4a-4202d6b18a24","Type":"ContainerStarted","Data":"ac52d66db673376fa00717d4e27f8398f404f5435bc4449db3e869ddfdfe9c27"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.009915 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" event={"ID":"be0c9fa2-2973-45e6-ad4a-4202d6b18a24","Type":"ContainerStarted","Data":"ec33d389288aac1488acf1a65990f2f5827df9b81f826406e91872444e0a7446"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.023311 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" event={"ID":"53dffcc9-85c5-4742-98f9-4ffb32ad20f6","Type":"ContainerStarted","Data":"1a8dd675f54f11818a149dc48594bb6e71bef514229d6c08ee3c40376110eeca"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.028792 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" event={"ID":"1bdd7236-3746-4764-8724-aab038391bea","Type":"ContainerStarted","Data":"6664dd4f4981487a47c53bd49f1bfa153ee6c9f0d9b2c38e73629e326ddd5762"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.036493 4941 generic.go:334] "Generic (PLEG): container finished" podID="211c8e09-1aae-466b-8dba-daab4d60d3cd" containerID="58df917dc911dacbdb71329854b505cb1ae01f4e34ed164e661e9de9decee011" exitCode=0 Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.036597 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" event={"ID":"211c8e09-1aae-466b-8dba-daab4d60d3cd","Type":"ContainerDied","Data":"58df917dc911dacbdb71329854b505cb1ae01f4e34ed164e661e9de9decee011"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.036724 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c9q44" podStartSLOduration=158.036706422 podStartE2EDuration="2m38.036706422s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.035823787 +0000 UTC m=+201.296964207" watchObservedRunningTime="2026-02-27 19:38:03.036706422 +0000 UTC m=+201.297846842" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.045625 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" event={"ID":"96a1fe7e-40d9-4b61-b28b-8e1714277767","Type":"ContainerStarted","Data":"d7ca9319e7fec743bae828d745f705a0bf07e4999fc4e56a1d18f05e9a1f2173"} Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.047317 4941 patch_prober.go:28] interesting pod/console-operator-58897d9998-kptkw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.047381 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kptkw" podUID="0386aa95-c139-4357-8a92-c610b4b32709" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.094247 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.096817 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.596800418 +0000 UTC m=+201.857940898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.122617 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" podStartSLOduration=158.122594716 podStartE2EDuration="2m38.122594716s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.077182864 +0000 UTC m=+201.338323284" watchObservedRunningTime="2026-02-27 19:38:03.122594716 +0000 UTC m=+201.383735136" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.146363 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zln9d" podStartSLOduration=158.146345107 podStartE2EDuration="2m38.146345107s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.121758733 +0000 UTC m=+201.382899143" watchObservedRunningTime="2026-02-27 19:38:03.146345107 +0000 UTC m=+201.407485527" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.146781 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5pbpw" podStartSLOduration=158.146775919 podStartE2EDuration="2m38.146775919s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.14572387 +0000 UTC m=+201.406864300" watchObservedRunningTime="2026-02-27 19:38:03.146775919 +0000 UTC m=+201.407916339" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.164571 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" podStartSLOduration=158.164553871 podStartE2EDuration="2m38.164553871s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.163842341 +0000 UTC m=+201.424982761" watchObservedRunningTime="2026-02-27 19:38:03.164553871 +0000 UTC m=+201.425694291" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.183540 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-drm7j" podStartSLOduration=158.183524447 podStartE2EDuration="2m38.183524447s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.181330225 +0000 UTC m=+201.442470645" watchObservedRunningTime="2026-02-27 19:38:03.183524447 +0000 UTC m=+201.444664857" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.196363 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.197532 4941 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.697516552 +0000 UTC m=+201.958656972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.235781 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v9sth" podStartSLOduration=157.235764022 podStartE2EDuration="2m37.235764022s" podCreationTimestamp="2026-02-27 19:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.234069744 +0000 UTC m=+201.495210164" watchObservedRunningTime="2026-02-27 19:38:03.235764022 +0000 UTC m=+201.496904442" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.236216 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-knckv" podStartSLOduration=8.236212184 podStartE2EDuration="8.236212184s" podCreationTimestamp="2026-02-27 19:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.212715631 +0000 UTC m=+201.473856051" watchObservedRunningTime="2026-02-27 19:38:03.236212184 +0000 UTC m=+201.497352604" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.314215 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.314638 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.814625728 +0000 UTC m=+202.075766148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.318062 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:03 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:03 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:03 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.318155 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.318298 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fpgvv" podStartSLOduration=158.318276971 podStartE2EDuration="2m38.318276971s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.312247911 +0000 UTC m=+201.573388341" watchObservedRunningTime="2026-02-27 19:38:03.318276971 +0000 UTC m=+201.579417381" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.352934 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.353908 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:38:03 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:38:03 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-k4vjk_openshift-infra(0d98f658-1f8e-41f5-bc4e-2f442243e453): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:38:03 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.355214 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.389879 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" podStartSLOduration=158.389855472 podStartE2EDuration="2m38.389855472s" 
podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.388896835 +0000 UTC m=+201.650037255" watchObservedRunningTime="2026-02-27 19:38:03.389855472 +0000 UTC m=+201.650995892" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.391298 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gndt2" podStartSLOduration=8.391288463 podStartE2EDuration="8.391288463s" podCreationTimestamp="2026-02-27 19:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.353892457 +0000 UTC m=+201.615032877" watchObservedRunningTime="2026-02-27 19:38:03.391288463 +0000 UTC m=+201.652428883" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.414769 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.415308 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:03.91529137 +0000 UTC m=+202.176431790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.429625 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9s2rr" podStartSLOduration=158.429608995 podStartE2EDuration="2m38.429608995s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.428865674 +0000 UTC m=+201.690006084" watchObservedRunningTime="2026-02-27 19:38:03.429608995 +0000 UTC m=+201.690749415" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.445645 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hphdl" podStartSLOduration=158.445629977 podStartE2EDuration="2m38.445629977s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.443868537 +0000 UTC m=+201.705008957" watchObservedRunningTime="2026-02-27 19:38:03.445629977 +0000 UTC m=+201.706770397" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.507774 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-52mbw" podStartSLOduration=8.507755771 podStartE2EDuration="8.507755771s" podCreationTimestamp="2026-02-27 19:37:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.470929311 +0000 UTC m=+201.732069731" watchObservedRunningTime="2026-02-27 19:38:03.507755771 +0000 UTC m=+201.768896191" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.518014 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.518538 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.018520475 +0000 UTC m=+202.279660895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.532741 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll96c" podStartSLOduration=158.532723556 podStartE2EDuration="2m38.532723556s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.531354337 +0000 UTC m=+201.792494767" watchObservedRunningTime="2026-02-27 19:38:03.532723556 +0000 UTC m=+201.793863976" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.533823 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" podStartSLOduration=158.533819467 podStartE2EDuration="2m38.533819467s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.508803281 +0000 UTC m=+201.769943701" watchObservedRunningTime="2026-02-27 19:38:03.533819467 +0000 UTC m=+201.794959887" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.570855 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8m66" podStartSLOduration=158.570829462 podStartE2EDuration="2m38.570829462s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.554235284 +0000 UTC m=+201.815375704" watchObservedRunningTime="2026-02-27 19:38:03.570829462 +0000 UTC m=+201.831969882" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.622133 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.622567 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.122546622 +0000 UTC m=+202.383687052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.639980 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52232: no serving certificate available for the kubelet" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.643739 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rvzxg" podStartSLOduration=158.643461863 podStartE2EDuration="2m38.643461863s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.592610067 +0000 UTC m=+201.853750497" watchObservedRunningTime="2026-02-27 19:38:03.643461863 +0000 UTC m=+201.904602293" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.677733 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" podStartSLOduration=158.67771457999999 podStartE2EDuration="2m38.67771458s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.644397359 +0000 UTC m=+201.905537779" watchObservedRunningTime="2026-02-27 19:38:03.67771458 +0000 UTC m=+201.938855000" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.724840 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52238: no serving certificate available for 
the kubelet" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.726210 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.726679 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.226663222 +0000 UTC m=+202.487803652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.741143 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z7cwj" podStartSLOduration=158.741122 podStartE2EDuration="2m38.741122s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.691703525 +0000 UTC m=+201.952843965" watchObservedRunningTime="2026-02-27 19:38:03.741122 +0000 UTC m=+202.002262410" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.829330 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.829684 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.3296694 +0000 UTC m=+202.590809820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.834132 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52252: no serving certificate available for the kubelet" Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.931149 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:03 crc kubenswrapper[4941]: E0227 19:38:03.931805 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 19:38:04.431777713 +0000 UTC m=+202.692918133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:03 crc kubenswrapper[4941]: I0227 19:38:03.977831 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52256: no serving certificate available for the kubelet" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.032438 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.032921 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.532904488 +0000 UTC m=+202.794044898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.051835 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" event={"ID":"211c8e09-1aae-466b-8dba-daab4d60d3cd","Type":"ContainerStarted","Data":"165b9ae3a4c32e1fc7e7104f9323e3300450d4bbb8ab3322f300da74b137509d"} Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.053447 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w969m" event={"ID":"aec473dd-8e2a-4af0-98fb-95e442141a92","Type":"ContainerStarted","Data":"69ac5563d97abd8eb2adc359b2efab4ec140a6681bbc46843c89046d1e1e39ab"} Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.055067 4941 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzs75 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.055101 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.057300 4941 patch_prober.go:28] interesting pod/console-operator-58897d9998-kptkw 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.057387 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kptkw" podUID="0386aa95-c139-4357-8a92-c610b4b32709" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.059994 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.070852 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52258: no serving certificate available for the kubelet" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.097894 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-52wdw" podStartSLOduration=159.097874863 podStartE2EDuration="2m39.097874863s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:03.742021705 +0000 UTC m=+202.003162125" watchObservedRunningTime="2026-02-27 19:38:04.097874863 +0000 UTC m=+202.359015283" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.135088 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.135564 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.635547426 +0000 UTC m=+202.896687846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.180737 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52266: no serving certificate available for the kubelet" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.236683 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.236971 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 19:38:04.736921278 +0000 UTC m=+202.998061698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.237393 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.237880 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.737868585 +0000 UTC m=+202.999009005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.267548 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52276: no serving certificate available for the kubelet" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.306816 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:04 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:04 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:04 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.306906 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.338779 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.339069 4941 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.839022741 +0000 UTC m=+203.100163161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.339256 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.339574 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.839560166 +0000 UTC m=+203.100700586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.376058 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52288: no serving certificate available for the kubelet" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.441322 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.441542 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.941515815 +0000 UTC m=+203.202656225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.441789 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.442208 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:04.942199394 +0000 UTC m=+203.203339814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.543205 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.543512 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.043453943 +0000 UTC m=+203.304594383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.644950 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.645412 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.145390961 +0000 UTC m=+203.406531441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.731388 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.732041 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.735592 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.735949 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.746014 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.746215 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 19:38:05.246187947 +0000 UTC m=+203.507328357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.746507 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.746804 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.246795434 +0000 UTC m=+203.507935844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.760325 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.849338 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.849859 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.849979 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: E0227 19:38:04.850129 4941 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.350110701 +0000 UTC m=+203.611251121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.951552 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.951941 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.951966 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:04 crc 
kubenswrapper[4941]: E0227 19:38:04.952333 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.452317237 +0000 UTC m=+203.713457657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.952827 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:04 crc kubenswrapper[4941]: I0227 19:38:04.995514 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.047275 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.053276 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.053415 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.553397611 +0000 UTC m=+203.814538021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.053629 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.053962 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.553953736 +0000 UTC m=+203.815094156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.058611 4941 generic.go:334] "Generic (PLEG): container finished" podID="533e82fe-cb1a-462c-a5cb-097cce12524e" containerID="4d40705c686c2a889acbb635772a275f8fbdf6748f0d5555114fa4f8639332ed" exitCode=0 Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.058748 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" event={"ID":"533e82fe-cb1a-462c-a5cb-097cce12524e","Type":"ContainerDied","Data":"4d40705c686c2a889acbb635772a275f8fbdf6748f0d5555114fa4f8639332ed"} Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.078062 4941 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xzs75 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.078158 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 
19:38:05.078372 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" event={"ID":"211c8e09-1aae-466b-8dba-daab4d60d3cd","Type":"ContainerStarted","Data":"64d746b67aa50b0e0c96182a414d8af7304cd5e23df32546001d15b4387ea83d"} Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.141603 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52292: no serving certificate available for the kubelet" Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.145443 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n48bk"] Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.146439 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.154604 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.154809 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.654770633 +0000 UTC m=+203.915911053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.155048 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.156587 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.656575144 +0000 UTC m=+203.917715764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.168805 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.192433 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" podStartSLOduration=160.192407506 podStartE2EDuration="2m40.192407506s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:05.175374435 +0000 UTC m=+203.436514875" watchObservedRunningTime="2026-02-27 19:38:05.192407506 +0000 UTC m=+203.453547926"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.194810 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n48bk"]
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.258992 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.259155 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.259192 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrsg\" (UniqueName: \"kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.259261 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.259403 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.759388117 +0000 UTC m=+204.020528527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.313149 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 19:38:05 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld
Feb 27 19:38:05 crc kubenswrapper[4941]: [+]process-running ok
Feb 27 19:38:05 crc kubenswrapper[4941]: healthz check failed
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.313699 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.360420 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrsg\" (UniqueName: \"kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.360513 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.360542 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.360593 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.360947 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.361378 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.361629 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.861618643 +0000 UTC m=+204.122759063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.397521 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrsg\" (UniqueName: \"kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg\") pod \"certified-operators-n48bk\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.446683 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k84zp"]
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.447988 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.461636 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.461841 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.961809832 +0000 UTC m=+204.222950252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.462037 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.462385 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:05.962377618 +0000 UTC m=+204.223518038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.469565 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.473032 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k84zp"]
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.564032 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.564439 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.564510 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74z8\" (UniqueName: \"kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.564606 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.564800 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.064770209 +0000 UTC m=+204.325910629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.666695 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.666784 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s74z8\" (UniqueName: \"kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.666861 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.666937 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.667615 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.667940 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.668775 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.168754435 +0000 UTC m=+204.429894855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.723448 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s74z8\" (UniqueName: \"kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8\") pod \"certified-operators-k84zp\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.763601 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.770191 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.770521 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.270501187 +0000 UTC m=+204.531641607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.839910 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"]
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.840128 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" containerID="cri-o://b37fc5d9e1e7189d51d431eb4f64da98708aa9ce26e42ceb0634857f852dfc96" gracePeriod=30
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.862119 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.875347 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.884632 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.384606819 +0000 UTC m=+204.645747239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.926393 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"]
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.926600 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" containerName="route-controller-manager" containerID="cri-o://0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700" gracePeriod=30
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.927246 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.964116 4941 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c877p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": read tcp 10.217.0.2:40158->10.217.0.26:8443: read: connection reset by peer" start-of-body=
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.964170 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": read tcp 10.217.0.2:40158->10.217.0.26:8443: read: connection reset by peer"
Feb 27 19:38:05 crc kubenswrapper[4941]: I0227 19:38:05.981162 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:05 crc kubenswrapper[4941]: E0227 19:38:05.981568 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.481551996 +0000 UTC m=+204.742692426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.059404 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n48bk"]
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.063379 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9kv24"]
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.064572 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.073876 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.088975 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.089339 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.589316279 +0000 UTC m=+204.850456729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.092087 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kv24"]
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.196885 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w969m" event={"ID":"aec473dd-8e2a-4af0-98fb-95e442141a92","Type":"ContainerStarted","Data":"84c028c8f411555026cc36a81a646f6550b474e3b2e3a358d518d83a8b6a127a"}
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.197503 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.197659 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xd6\" (UniqueName: \"kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.197697 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.197738 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.197839 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.697825352 +0000 UTC m=+204.958965772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.213363 4941 generic.go:334] "Generic (PLEG): container finished" podID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerID="b37fc5d9e1e7189d51d431eb4f64da98708aa9ce26e42ceb0634857f852dfc96" exitCode=0
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.214536 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" event={"ID":"a380f8cf-f0a1-41d0-ac65-4f664f543f4d","Type":"ContainerDied","Data":"b37fc5d9e1e7189d51d431eb4f64da98708aa9ce26e42ceb0634857f852dfc96"}
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.305912 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.306246 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.306299 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xd6\" (UniqueName: \"kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.306348 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.307261 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.307760 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.807748395 +0000 UTC m=+205.068888815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.308616 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 19:38:06 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld
Feb 27 19:38:06 crc kubenswrapper[4941]: [+]process-running ok
Feb 27 19:38:06 crc kubenswrapper[4941]: healthz check failed
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.308644 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.309032 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.321939 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.352057 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xd6\" (UniqueName: \"kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6\") pod \"community-operators-9kv24\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.389911 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k84zp"]
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.414168 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.414306 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.914288303 +0000 UTC m=+205.175428723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.420678 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l"
Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.421204 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:06.921186478 +0000 UTC m=+205.182326918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.452545 4941 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-9kv24" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.464046 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r9t78"] Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.475869 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.485361 4941 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.500381 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r9t78"] Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.504568 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52298: no serving certificate available for the kubelet" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.521708 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.522135 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.022117018 +0000 UTC m=+205.283257448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.572809 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.626044 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.626092 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.626115 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 
19:38:06.626192 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstmx\" (UniqueName: \"kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: E0227 19:38:06.626484 4941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 19:38:07.126457134 +0000 UTC m=+205.387597554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2h46l" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.688869 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.710671 4941 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T19:38:06.48538166Z","Handler":null,"Name":""} Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.733821 4941 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.733850 4941 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735019 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcq9q\" (UniqueName: \"kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q\") pod \"51e23993-d4b6-4929-b14c-c4a920b35760\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735101 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735176 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config\") pod \"51e23993-d4b6-4929-b14c-c4a920b35760\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " Feb 27 
19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735195 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert\") pod \"51e23993-d4b6-4929-b14c-c4a920b35760\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735230 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca\") pod \"51e23993-d4b6-4929-b14c-c4a920b35760\" (UID: \"51e23993-d4b6-4929-b14c-c4a920b35760\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735380 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstmx\" (UniqueName: \"kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735435 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.735492 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.736045 4941 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.740285 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.740870 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config" (OuterVolumeSpecName: "config") pod "51e23993-d4b6-4929-b14c-c4a920b35760" (UID: "51e23993-d4b6-4929-b14c-c4a920b35760"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.741751 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca" (OuterVolumeSpecName: "client-ca") pod "51e23993-d4b6-4929-b14c-c4a920b35760" (UID: "51e23993-d4b6-4929-b14c-c4a920b35760"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.765679 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.766321 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstmx\" (UniqueName: \"kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx\") pod \"community-operators-r9t78\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.773902 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51e23993-d4b6-4929-b14c-c4a920b35760" (UID: "51e23993-d4b6-4929-b14c-c4a920b35760"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.776354 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q" (OuterVolumeSpecName: "kube-api-access-dcq9q") pod "51e23993-d4b6-4929-b14c-c4a920b35760" (UID: "51e23993-d4b6-4929-b14c-c4a920b35760"). InnerVolumeSpecName "kube-api-access-dcq9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.795530 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.811317 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.834670 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kv24"] Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.835932 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnxtv\" (UniqueName: \"kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv\") pod \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836010 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles\") pod \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836057 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config\") pod \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836089 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert\") pod \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\" (UID: \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836127 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca\") pod \"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\" (UID: 
\"a380f8cf-f0a1-41d0-ac65-4f664f543f4d\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836410 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836499 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836514 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e23993-d4b6-4929-b14c-c4a920b35760-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836528 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51e23993-d4b6-4929-b14c-c4a920b35760-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836541 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcq9q\" (UniqueName: \"kubernetes.io/projected/51e23993-d4b6-4929-b14c-c4a920b35760-kube-api-access-dcq9q\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.836784 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a380f8cf-f0a1-41d0-ac65-4f664f543f4d" (UID: "a380f8cf-f0a1-41d0-ac65-4f664f543f4d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.837279 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca" (OuterVolumeSpecName: "client-ca") pod "a380f8cf-f0a1-41d0-ac65-4f664f543f4d" (UID: "a380f8cf-f0a1-41d0-ac65-4f664f543f4d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.837939 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config" (OuterVolumeSpecName: "config") pod "a380f8cf-f0a1-41d0-ac65-4f664f543f4d" (UID: "a380f8cf-f0a1-41d0-ac65-4f664f543f4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.848057 4941 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.848108 4941 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.848280 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv" (OuterVolumeSpecName: "kube-api-access-bnxtv") pod "a380f8cf-f0a1-41d0-ac65-4f664f543f4d" (UID: "a380f8cf-f0a1-41d0-ac65-4f664f543f4d"). InnerVolumeSpecName "kube-api-access-bnxtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.848290 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a380f8cf-f0a1-41d0-ac65-4f664f543f4d" (UID: "a380f8cf-f0a1-41d0-ac65-4f664f543f4d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: W0227 19:38:06.853506 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81b6db0c_c9b7_4f84_8ec4_c690e0c59788.slice/crio-0284ca04245f00fa0134fe4d1d2e91dd3d225742e2fed48bfb3f5ddeb990eeb9 WatchSource:0}: Error finding container 0284ca04245f00fa0134fe4d1d2e91dd3d225742e2fed48bfb3f5ddeb990eeb9: Status 404 returned error can't find the container with id 0284ca04245f00fa0134fe4d1d2e91dd3d225742e2fed48bfb3f5ddeb990eeb9 Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.885923 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2h46l\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937264 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume\") pod \"533e82fe-cb1a-462c-a5cb-097cce12524e\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937310 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpnhs\" (UniqueName: \"kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs\") pod \"533e82fe-cb1a-462c-a5cb-097cce12524e\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937345 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") pod \"533e82fe-cb1a-462c-a5cb-097cce12524e\" (UID: \"533e82fe-cb1a-462c-a5cb-097cce12524e\") " Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937606 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnxtv\" (UniqueName: \"kubernetes.io/projected/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-kube-api-access-bnxtv\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937687 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937698 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937706 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.937714 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a380f8cf-f0a1-41d0-ac65-4f664f543f4d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.938771 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume" (OuterVolumeSpecName: "config-volume") pod "533e82fe-cb1a-462c-a5cb-097cce12524e" (UID: "533e82fe-cb1a-462c-a5cb-097cce12524e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.942080 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs" (OuterVolumeSpecName: "kube-api-access-tpnhs") pod "533e82fe-cb1a-462c-a5cb-097cce12524e" (UID: "533e82fe-cb1a-462c-a5cb-097cce12524e"). InnerVolumeSpecName "kube-api-access-tpnhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:06 crc kubenswrapper[4941]: I0227 19:38:06.942116 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "533e82fe-cb1a-462c-a5cb-097cce12524e" (UID: "533e82fe-cb1a-462c-a5cb-097cce12524e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.038318 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r9t78"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.042082 4941 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533e82fe-cb1a-462c-a5cb-097cce12524e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.042118 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpnhs\" (UniqueName: \"kubernetes.io/projected/533e82fe-cb1a-462c-a5cb-097cce12524e-kube-api-access-tpnhs\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.042131 4941 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533e82fe-cb1a-462c-a5cb-097cce12524e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.092907 
4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xw8nw" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.131296 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.198154 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:07 crc kubenswrapper[4941]: E0227 19:38:07.198771 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533e82fe-cb1a-462c-a5cb-097cce12524e" containerName="collect-profiles" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.198864 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="533e82fe-cb1a-462c-a5cb-097cce12524e" containerName="collect-profiles" Feb 27 19:38:07 crc kubenswrapper[4941]: E0227 19:38:07.198946 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" containerName="route-controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.199018 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" containerName="route-controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: E0227 19:38:07.199117 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.199191 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.199550 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="533e82fe-cb1a-462c-a5cb-097cce12524e" containerName="collect-profiles" Feb 27 19:38:07 crc kubenswrapper[4941]: 
I0227 19:38:07.199649 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" containerName="controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.199737 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" containerName="route-controller-manager" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.200243 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.211623 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.248024 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" event={"ID":"533e82fe-cb1a-462c-a5cb-097cce12524e","Type":"ContainerDied","Data":"640b263c2405712895c24a9734bc3d3a25a55fcc9caddc88b09ef29d05c6c5a6"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.248077 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640b263c2405712895c24a9734bc3d3a25a55fcc9caddc88b09ef29d05c6c5a6" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.248170 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-nkxd5" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.248354 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.250982 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.265654 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" event={"ID":"a380f8cf-f0a1-41d0-ac65-4f664f543f4d","Type":"ContainerDied","Data":"c8d9041e18f0f15b1a143759f6fed0df929df0ffaf10d64dcbe88edc5b5a174b"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.265700 4941 scope.go:117] "RemoveContainer" containerID="b37fc5d9e1e7189d51d431eb4f64da98708aa9ce26e42ceb0634857f852dfc96" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.265828 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdq5q" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.284138 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.284780 4941 generic.go:334] "Generic (PLEG): container finished" podID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerID="e62f28f6f5a652d6b5ea5bc47cc3e72848abbb3ecbfb94b445ecb985db14d520" exitCode=0 Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.284877 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerDied","Data":"e62f28f6f5a652d6b5ea5bc47cc3e72848abbb3ecbfb94b445ecb985db14d520"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.284970 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerStarted","Data":"2ccdc9351b36ef93ce9b150165f87d159297e220018a2d47e4a261699d63be3a"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 
19:38:07.298318 4941 generic.go:334] "Generic (PLEG): container finished" podID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerID="4bd71ffc409bb4262bbee3ed86d1540e4c701bc0ec9a8273559d3dd218e83c22" exitCode=0 Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.298379 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerDied","Data":"4bd71ffc409bb4262bbee3ed86d1540e4c701bc0ec9a8273559d3dd218e83c22"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.298402 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerStarted","Data":"b85e592286db17c405ca2bc2041d5cc4e9bedcdfa9c87e3f7cad074d700864e8"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.302822 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9t78" event={"ID":"5d3d1f1c-429f-4fd3-a28d-089c23afbbba","Type":"ContainerStarted","Data":"cadf5f0eea5854df85489cd50e248ba036beb03362362be2b09daa2cdeb61502"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.304648 4941 generic.go:334] "Generic (PLEG): container finished" podID="51e23993-d4b6-4929-b14c-c4a920b35760" containerID="0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700" exitCode=0 Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.304731 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" event={"ID":"51e23993-d4b6-4929-b14c-c4a920b35760","Type":"ContainerDied","Data":"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.304767 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.304793 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p" event={"ID":"51e23993-d4b6-4929-b14c-c4a920b35760","Type":"ContainerDied","Data":"e2db4964dff73a389170ce6517ab7e83c1ae77da15ebfec8228b3039d26ef7d4"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.304820 4941 scope.go:117] "RemoveContainer" containerID="0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.306036 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:07 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:07 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:07 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.306076 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.310199 4941 generic.go:334] "Generic (PLEG): container finished" podID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" containerID="d6c564b158435c2dc89b374a38b50666783aa1713a04c16dc6a9da8bf5bd9c88" exitCode=0 Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.310268 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kv24" 
event={"ID":"81b6db0c-c9b7-4f84-8ec4-c690e0c59788","Type":"ContainerDied","Data":"d6c564b158435c2dc89b374a38b50666783aa1713a04c16dc6a9da8bf5bd9c88"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.310300 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kv24" event={"ID":"81b6db0c-c9b7-4f84-8ec4-c690e0c59788","Type":"ContainerStarted","Data":"0284ca04245f00fa0134fe4d1d2e91dd3d225742e2fed48bfb3f5ddeb990eeb9"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.322971 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.325202 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"98336c71-e738-434a-9824-fbdd0419bd57","Type":"ContainerStarted","Data":"9d512af60e68261d3173de35b25fa23c0d869b4eedb24a9791920ed7ad77b7b3"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.325256 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"98336c71-e738-434a-9824-fbdd0419bd57","Type":"ContainerStarted","Data":"f2907722a1de7ddc94d45909457e6aada284601a0c810949b2e1c2631ed7198e"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.331118 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdq5q"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.331234 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w969m" event={"ID":"aec473dd-8e2a-4af0-98fb-95e442141a92","Type":"ContainerStarted","Data":"04555a3ca768b863bdd0a5e8eeafd05555198ce389b1e649e2d03d43a33d2acc"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.331347 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-w969m" event={"ID":"aec473dd-8e2a-4af0-98fb-95e442141a92","Type":"ContainerStarted","Data":"8dd16fc29d6705899b872f110bad5080868ee3b286b389a07ee9619ceda4a386"} Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.348733 4941 scope.go:117] "RemoveContainer" containerID="0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700" Feb 27 19:38:07 crc kubenswrapper[4941]: E0227 19:38:07.349311 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700\": container with ID starting with 0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700 not found: ID does not exist" containerID="0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.349343 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700"} err="failed to get container status \"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700\": rpc error: code = NotFound desc = could not find container \"0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700\": container with ID starting with 0fdbc7c4633c912cad82728470a5851cd9173700fdc4f0272a96cc331d9be700 not found: ID does not exist" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.351031 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.351084 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.351150 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.351189 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvcqp\" (UniqueName: \"kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.351223 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.373491 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w969m" podStartSLOduration=12.373445784 podStartE2EDuration="12.373445784s" podCreationTimestamp="2026-02-27 19:37:55 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:07.369839742 +0000 UTC m=+205.630980162" watchObservedRunningTime="2026-02-27 19:38:07.373445784 +0000 UTC m=+205.634586204" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.424485 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.427906 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c877p"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.445260 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.445239351 podStartE2EDuration="3.445239351s" podCreationTimestamp="2026-02-27 19:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:07.438298895 +0000 UTC m=+205.699439315" watchObservedRunningTime="2026-02-27 19:38:07.445239351 +0000 UTC m=+205.706379771" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.445398 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452633 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452717 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zvcqp\" (UniqueName: \"kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452748 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452788 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452886 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452955 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " 
pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.452997 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.453127 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.453234 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48jcp\" (UniqueName: \"kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.453300 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.456818 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.460724 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.462565 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.464956 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.464963 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.473829 4941 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zvcqp\" (UniqueName: \"kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp\") pod \"controller-manager-86d688d665-l876b\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.522506 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556163 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556255 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556280 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556301 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556322 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48jcp\" (UniqueName: \"kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556338 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556393 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.556419 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 
19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.557678 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.558742 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.559181 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.560524 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.560720 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.562552 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.572258 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68a8b3ac-f7b7-412b-8c30-96c44ba947c9-metrics-certs\") pod \"network-metrics-daemon-mvmp7\" (UID: \"68a8b3ac-f7b7-412b-8c30-96c44ba947c9\") " pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.578897 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48jcp\" (UniqueName: \"kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp\") pod \"route-controller-manager-7d974ffff6-dj7q7\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.583544 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.586769 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.589657 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.594942 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.694740 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mvmp7" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.788816 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.820034 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.820062 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:38:07 crc kubenswrapper[4941]: W0227 19:38:07.842619 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8728292_0583_4068_9926_a5a3f516408f.slice/crio-95fe716656d62fc9fa2db9c7c5022f79bbefbcebc98f2ddddc5f7dd20f447136 WatchSource:0}: Error finding container 95fe716656d62fc9fa2db9c7c5022f79bbefbcebc98f2ddddc5f7dd20f447136: Status 404 returned error can't find the container with id 95fe716656d62fc9fa2db9c7c5022f79bbefbcebc98f2ddddc5f7dd20f447136 Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.844137 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.848535 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"] Feb 27 19:38:07 crc 
kubenswrapper[4941]: I0227 19:38:07.858082 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.860168 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.862265 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwb9\" (UniqueName: \"kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.862324 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.862534 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.864567 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"] Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.887194 4941 patch_prober.go:28] interesting pod/downloads-7954f5f757-w7qqt container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.887208 4941 patch_prober.go:28] interesting pod/downloads-7954f5f757-w7qqt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.887239 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w7qqt" podUID="7ae7c46e-c974-471c-8f96-1dc0fd38e49d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.887255 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w7qqt" podUID="7ae7c46e-c974-471c-8f96-1dc0fd38e49d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.967527 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.967588 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwb9\" (UniqueName: \"kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc 
kubenswrapper[4941]: I0227 19:38:07.967627 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.968058 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.968096 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:07 crc kubenswrapper[4941]: I0227 19:38:07.989094 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwb9\" (UniqueName: \"kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9\") pod \"redhat-marketplace-kllrr\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.030381 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.031263 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.038174 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.039292 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.041038 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.071662 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.071873 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.118994 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mvmp7"] Feb 27 19:38:08 crc kubenswrapper[4941]: W0227 19:38:08.138417 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-dec762965fb4ae19f6120c9601182958eb9d613227580c8a659ee99261562be0 WatchSource:0}: Error finding container 
dec762965fb4ae19f6120c9601182958eb9d613227580c8a659ee99261562be0: Status 404 returned error can't find the container with id dec762965fb4ae19f6120c9601182958eb9d613227580c8a659ee99261562be0 Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.173593 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.173917 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.173710 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.180494 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:38:08 crc kubenswrapper[4941]: W0227 19:38:08.201344 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e9186488d9438b5d0db0c3cf2ee2e47777ae1ae4e169791bdf822986791a8a93 WatchSource:0}: Error finding container e9186488d9438b5d0db0c3cf2ee2e47777ae1ae4e169791bdf822986791a8a93: Status 404 returned error can't find the container with id e9186488d9438b5d0db0c3cf2ee2e47777ae1ae4e169791bdf822986791a8a93 Feb 27 19:38:08 crc kubenswrapper[4941]: W0227 19:38:08.203977 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c9f796436b938a6dcb7af51034cff4136db066edb40d1a0fe20c770a2ca7ebbb WatchSource:0}: Error finding container c9f796436b938a6dcb7af51034cff4136db066edb40d1a0fe20c770a2ca7ebbb: Status 404 returned error can't find the container with id c9f796436b938a6dcb7af51034cff4136db066edb40d1a0fe20c770a2ca7ebbb Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.205113 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.238385 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.242861 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.242909 4941 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.246208 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.252435 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.263505 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.275489 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.275565 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.275583 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8246p\" (UniqueName: \"kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.279343 4941 patch_prober.go:28] interesting 
pod/apiserver-76f77b778f-9tm22 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]log ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]etcd ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/max-in-flight-filter ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 27 19:38:08 crc kubenswrapper[4941]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 27 19:38:08 crc kubenswrapper[4941]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/project.openshift.io-projectcache ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-startinformers ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 27 19:38:08 crc kubenswrapper[4941]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 19:38:08 crc kubenswrapper[4941]: livez check failed Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.280034 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" podUID="211c8e09-1aae-466b-8dba-daab4d60d3cd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.301964 4941 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.308824 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:08 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:08 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:08 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.308867 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.336118 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.342609 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kptkw" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.348316 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.364878 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e9186488d9438b5d0db0c3cf2ee2e47777ae1ae4e169791bdf822986791a8a93"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.377025 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.377165 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.377216 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8246p\" (UniqueName: \"kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.388347 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.389332 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.389573 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.407920 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8246p\" (UniqueName: \"kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p\") pod \"redhat-marketplace-t8rmn\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.427342 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" event={"ID":"1e15fd8c-4806-428d-ab5a-d9e99c669eaa","Type":"ContainerStarted","Data":"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.427404 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" event={"ID":"1e15fd8c-4806-428d-ab5a-d9e99c669eaa","Type":"ContainerStarted","Data":"7aaa1992de4184b21ae463202336db8f0a126e7d4628397fa53cadd1864772c7"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.427759 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.431438 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.431590 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftrsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n48bk_openshift-marketplace(91aa0e95-3a50-4027-abeb-b8bd2abbcea5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:08 crc 
kubenswrapper[4941]: E0227 19:38:08.432924 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-n48bk" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.449131 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" event={"ID":"68a8b3ac-f7b7-412b-8c30-96c44ba947c9","Type":"ContainerStarted","Data":"001a9fdbc4b33c31b958954c80678bb896987ed5c0e05db848f4cc37f81bb6ba"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.449690 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" podStartSLOduration=163.44967335 podStartE2EDuration="2m43.44967335s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:08.444035511 +0000 UTC m=+206.705175921" watchObservedRunningTime="2026-02-27 19:38:08.44967335 +0000 UTC m=+206.710813770" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.457597 4941 generic.go:334] "Generic (PLEG): container finished" podID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" containerID="e8de783b39b847a1ad55382f0a1fb125e965eeeccb613b6f02d24a5107d6f64d" exitCode=0 Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.458617 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9t78" 
event={"ID":"5d3d1f1c-429f-4fd3-a28d-089c23afbbba","Type":"ContainerDied","Data":"e8de783b39b847a1ad55382f0a1fb125e965eeeccb613b6f02d24a5107d6f64d"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.462905 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.462935 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.464373 4941 patch_prober.go:28] interesting pod/console-f9d7485db-fpgvv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.464419 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fpgvv" podUID="15767381-d283-4c54-8c38-19d68dec9371" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.501452 4941 generic.go:334] "Generic (PLEG): container finished" podID="98336c71-e738-434a-9824-fbdd0419bd57" containerID="9d512af60e68261d3173de35b25fa23c0d869b4eedb24a9791920ed7ad77b7b3" exitCode=0 Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.503731 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e23993-d4b6-4929-b14c-c4a920b35760" path="/var/lib/kubelet/pods/51e23993-d4b6-4929-b14c-c4a920b35760/volumes" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.514773 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 19:38:08 crc kubenswrapper[4941]: 
I0227 19:38:08.515748 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a380f8cf-f0a1-41d0-ac65-4f664f543f4d" path="/var/lib/kubelet/pods/a380f8cf-f0a1-41d0-ac65-4f664f543f4d/volumes" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.516322 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"98336c71-e738-434a-9824-fbdd0419bd57","Type":"ContainerDied","Data":"9d512af60e68261d3173de35b25fa23c0d869b4eedb24a9791920ed7ad77b7b3"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.516375 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.516444 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv78l" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.516463 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.520304 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t6sr2" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.520883 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" event={"ID":"3ac63b9c-3a83-495c-98e9-b18102769926","Type":"ContainerStarted","Data":"70fba4469d59e096658b181e69bd599ba362857dbbd72c693d163006e54830ef"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.549752 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.566747 4941 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-94d5q" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.591871 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cad2758c37dd89ad28836305f7e348655c559a9ec6412dd1b27ccbca8453f934"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.592525 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dec762965fb4ae19f6120c9601182958eb9d613227580c8a659ee99261562be0"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.593434 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.607805 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c9f796436b938a6dcb7af51034cff4136db066edb40d1a0fe20c770a2ca7ebbb"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.616200 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" event={"ID":"b8728292-0583-4068-9926-a5a3f516408f","Type":"ContainerStarted","Data":"6e632a98d5e3adf2b51cc187b47a14c3b2d3730c50daa10efbebfc1507b2a926"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.616294 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" 
event={"ID":"b8728292-0583-4068-9926-a5a3f516408f","Type":"ContainerStarted","Data":"95fe716656d62fc9fa2db9c7c5022f79bbefbcebc98f2ddddc5f7dd20f447136"} Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.616751 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.617433 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.617608 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s74z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k84zp_openshift-marketplace(f8ac2de3-f395-4fe8-90f4-a7fa58792f5a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.618882 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-k84zp" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.635589 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vrhrl" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.645014 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.659813 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.683182 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.683432 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6xd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9kv24_openshift-marketplace(81b6db0c-c9b7-4f84-8ec4-c690e0c59788): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:08 crc kubenswrapper[4941]: E0227 19:38:08.685064 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.710717 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" podStartSLOduration=1.71069185 podStartE2EDuration="1.71069185s" podCreationTimestamp="2026-02-27 19:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:08.677011209 +0000 UTC m=+206.938151639" watchObservedRunningTime="2026-02-27 19:38:08.71069185 +0000 UTC m=+206.971832260" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.745665 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.887365 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.888921 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.904908 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.905008 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.934740 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"] Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.990160 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.990205 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5cs\" (UniqueName: \"kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:08 crc kubenswrapper[4941]: I0227 19:38:08.990407 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.020139 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.094318 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.094388 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.094418 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5cs\" (UniqueName: \"kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.095535 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.095958 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 
crc kubenswrapper[4941]: I0227 19:38:09.142776 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52304: no serving certificate available for the kubelet" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.163104 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5cs\" (UniqueName: \"kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs\") pod \"redhat-operators-4bjr9\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.167584 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.167759 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xstmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r9t78_openshift-marketplace(5d3d1f1c-429f-4fd3-a28d-089c23afbbba): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.169098 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.255808 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.281656 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"] Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.282712 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.304096 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"] Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.321137 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:09 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:09 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:09 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.321229 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.416411 4941 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlfk\" (UniqueName: \"kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.416542 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.416570 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.484579 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"] Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.517491 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.517842 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities\") pod \"redhat-operators-5wzr2\" (UID: 
\"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.517882 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjlfk\" (UniqueName: \"kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.518420 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.518459 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.557724 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjlfk\" (UniqueName: \"kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk\") pod \"redhat-operators-5wzr2\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") " pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.653976 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3675d246-e23b-476d-a459-8ed197d667d4","Type":"ContainerStarted","Data":"43fa74754756d2b5ccab3e904f66e8cee89739906ecb73f76927877eaa45ae5b"} Feb 27 19:38:09 crc 
kubenswrapper[4941]: I0227 19:38:09.657933 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" event={"ID":"3ac63b9c-3a83-495c-98e9-b18102769926","Type":"ContainerStarted","Data":"e176b4f37d63173a9ad96762e670c657a0ed6cee6b913061811373e966e597c8"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.658004 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.662162 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" event={"ID":"68a8b3ac-f7b7-412b-8c30-96c44ba947c9","Type":"ContainerStarted","Data":"525d0a1eb0e9f8418899625f6e9c7e2a8b66db6218dc727149610c19f76be698"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.662255 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mvmp7" event={"ID":"68a8b3ac-f7b7-412b-8c30-96c44ba947c9","Type":"ContainerStarted","Data":"86c1d39b642a47eaecdfa6c13e70ac97e5d2938ecf19a611bad575c7b5ce6e5c"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.666770 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.677916 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerStarted","Data":"32315a623a92f2d30c955b9b1eac1352abce57abee3d1ffb3915de731068b3b1"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.680611 4941 generic.go:334] "Generic (PLEG): container finished" podID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" containerID="a6c9e879e88341ddb92d2a27f9febf8bfb0eedc9c34c806c762be6a439f9c827" exitCode=0 Feb 27 
19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.680670 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kllrr" event={"ID":"bd71dd28-494b-4f92-8cf2-f79b4709c6d5","Type":"ContainerDied","Data":"a6c9e879e88341ddb92d2a27f9febf8bfb0eedc9c34c806c762be6a439f9c827"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.680691 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kllrr" event={"ID":"bd71dd28-494b-4f92-8cf2-f79b4709c6d5","Type":"ContainerStarted","Data":"27f28222dcc1d2318165080245f46d50da993b09feddb6060ff5f064396df83e"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.689430 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"04e9846e3f0cd85e4ee8443b0cd079a2678cf87a5192a15036cedea742c1d858"} Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.697065 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.698349 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" podStartSLOduration=2.698338084 podStartE2EDuration="2.698338084s" podCreationTimestamp="2026-02-27 19:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:09.696701448 +0000 UTC m=+207.957841868" watchObservedRunningTime="2026-02-27 19:38:09.698338084 +0000 UTC m=+207.959478514" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.707613 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ddcdb05c31e7df569ae7dcf12f9416d3663a236a8c9b7248cbf1bcaead1923f"} Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.718748 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n48bk" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.719052 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.719051 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:38:09 crc kubenswrapper[4941]: E0227 19:38:09.719119 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k84zp" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.775191 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"] Feb 27 19:38:09 crc kubenswrapper[4941]: I0227 19:38:09.876694 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mvmp7" podStartSLOduration=164.876670969 podStartE2EDuration="2m44.876670969s" podCreationTimestamp="2026-02-27 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:09.838309586 +0000 UTC m=+208.099450006" watchObservedRunningTime="2026-02-27 19:38:09.876670969 +0000 UTC m=+208.137811389" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.245677 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.307565 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:10 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:10 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:10 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.307652 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.331453 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access\") pod \"98336c71-e738-434a-9824-fbdd0419bd57\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.331554 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir\") pod \"98336c71-e738-434a-9824-fbdd0419bd57\" (UID: \"98336c71-e738-434a-9824-fbdd0419bd57\") " Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.332092 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98336c71-e738-434a-9824-fbdd0419bd57" (UID: "98336c71-e738-434a-9824-fbdd0419bd57"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.339090 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98336c71-e738-434a-9824-fbdd0419bd57" (UID: "98336c71-e738-434a-9824-fbdd0419bd57"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.385177 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"] Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.433570 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98336c71-e738-434a-9824-fbdd0419bd57-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.433629 4941 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98336c71-e738-434a-9824-fbdd0419bd57-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:10 crc kubenswrapper[4941]: W0227 19:38:10.446524 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c166e9_0f02_4929_b623_404b062973fc.slice/crio-8f25ef1a402c00945b7e89bdd110c365e1b7b38f56bbf66ce011fe072919a856 WatchSource:0}: Error finding container 8f25ef1a402c00945b7e89bdd110c365e1b7b38f56bbf66ce011fe072919a856: Status 404 returned error can't find the container with id 8f25ef1a402c00945b7e89bdd110c365e1b7b38f56bbf66ce011fe072919a856 Feb 27 19:38:10 crc kubenswrapper[4941]: E0227 19:38:10.476414 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:38:10 crc kubenswrapper[4941]: E0227 19:38:10.476661 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blwb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-kllrr_openshift-marketplace(bd71dd28-494b-4f92-8cf2-f79b4709c6d5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:10 crc kubenswrapper[4941]: E0227 19:38:10.477895 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.712203 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3675d246-e23b-476d-a459-8ed197d667d4","Type":"ContainerStarted","Data":"a22bfc9a213e38aede3f8f666a530a9317c4eb3d77739a1adf68817db23dbbaf"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.713856 4941 generic.go:334] "Generic (PLEG): container finished" podID="05415fb4-4075-493f-91c7-a53f30a70618" containerID="2d7728a079399b7b5244d50ba06fa0652669b8038f0252dd5e0697d4db395a96" exitCode=0 Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.713913 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerDied","Data":"2d7728a079399b7b5244d50ba06fa0652669b8038f0252dd5e0697d4db395a96"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.713934 4941 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerStarted","Data":"a1bca9ca3b61419f803d8dc44dab2ea336eb4baedf9adebbcf11482b53c6b755"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.714926 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerStarted","Data":"8f25ef1a402c00945b7e89bdd110c365e1b7b38f56bbf66ce011fe072919a856"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.716228 4941 generic.go:334] "Generic (PLEG): container finished" podID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerID="ee2aca124e834c751406e526fd18da60e815864998a63cef05982c30a386de52" exitCode=0 Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.716300 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerDied","Data":"ee2aca124e834c751406e526fd18da60e815864998a63cef05982c30a386de52"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.718208 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.718256 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"98336c71-e738-434a-9824-fbdd0419bd57","Type":"ContainerDied","Data":"f2907722a1de7ddc94d45909457e6aada284601a0c810949b2e1c2631ed7198e"} Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.718273 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2907722a1de7ddc94d45909457e6aada284601a0c810949b2e1c2631ed7198e" Feb 27 19:38:10 crc kubenswrapper[4941]: E0227 19:38:10.719772 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:38:10 crc kubenswrapper[4941]: I0227 19:38:10.726618 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.726600426 podStartE2EDuration="2.726600426s" podCreationTimestamp="2026-02-27 19:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:10.725006361 +0000 UTC m=+208.986146781" watchObservedRunningTime="2026-02-27 19:38:10.726600426 +0000 UTC m=+208.987740846" Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.307242 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:11 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 
27 19:38:11 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:11 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.307313 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:11 crc kubenswrapper[4941]: E0227 19:38:11.428353 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:38:11 crc kubenswrapper[4941]: E0227 19:38:11.428640 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8246p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-t8rmn_openshift-marketplace(2fbfd7a3-e234-4afe-aaa4-f89d486c7164): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:11 crc kubenswrapper[4941]: E0227 19:38:11.429896 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-t8rmn" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.724396 4941 generic.go:334] "Generic (PLEG): container finished" podID="3675d246-e23b-476d-a459-8ed197d667d4" containerID="a22bfc9a213e38aede3f8f666a530a9317c4eb3d77739a1adf68817db23dbbaf" exitCode=0 Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.724501 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3675d246-e23b-476d-a459-8ed197d667d4","Type":"ContainerDied","Data":"a22bfc9a213e38aede3f8f666a530a9317c4eb3d77739a1adf68817db23dbbaf"} Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.725869 4941 generic.go:334] "Generic (PLEG): container finished" podID="e1c166e9-0f02-4929-b623-404b062973fc" containerID="676d6abe1124c45d88e41dc4d53bd580fc2222ddbdf6985ec824a407528ea977" exitCode=0 Feb 27 19:38:11 crc kubenswrapper[4941]: I0227 19:38:11.725899 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerDied","Data":"676d6abe1124c45d88e41dc4d53bd580fc2222ddbdf6985ec824a407528ea977"} Feb 27 19:38:11 crc kubenswrapper[4941]: E0227 19:38:11.728134 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-t8rmn" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" Feb 27 19:38:12 crc kubenswrapper[4941]: I0227 19:38:12.304414 4941 patch_prober.go:28] interesting 
pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:12 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:12 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:12 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:12 crc kubenswrapper[4941]: I0227 19:38:12.304542 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:12 crc kubenswrapper[4941]: I0227 19:38:12.346768 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52320: no serving certificate available for the kubelet" Feb 27 19:38:12 crc kubenswrapper[4941]: I0227 19:38:12.972846 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.076010 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir\") pod \"3675d246-e23b-476d-a459-8ed197d667d4\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.076173 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access\") pod \"3675d246-e23b-476d-a459-8ed197d667d4\" (UID: \"3675d246-e23b-476d-a459-8ed197d667d4\") " Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.076161 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3675d246-e23b-476d-a459-8ed197d667d4" (UID: "3675d246-e23b-476d-a459-8ed197d667d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.076692 4941 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3675d246-e23b-476d-a459-8ed197d667d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.085063 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3675d246-e23b-476d-a459-8ed197d667d4" (UID: "3675d246-e23b-476d-a459-8ed197d667d4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.178276 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3675d246-e23b-476d-a459-8ed197d667d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.249416 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.254341 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9tm22" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.320519 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:13 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:13 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:13 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.320587 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.744781 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.744765 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3675d246-e23b-476d-a459-8ed197d667d4","Type":"ContainerDied","Data":"43fa74754756d2b5ccab3e904f66e8cee89739906ecb73f76927877eaa45ae5b"} Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.746009 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fa74754756d2b5ccab3e904f66e8cee89739906ecb73f76927877eaa45ae5b" Feb 27 19:38:13 crc kubenswrapper[4941]: I0227 19:38:13.906936 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-knckv" Feb 27 19:38:14 crc kubenswrapper[4941]: I0227 19:38:14.291850 4941 ???:1] "http: TLS handshake error from 192.168.126.11:40788: no serving certificate available for the kubelet" Feb 27 19:38:14 crc kubenswrapper[4941]: I0227 19:38:14.306798 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:14 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:14 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:14 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:14 crc kubenswrapper[4941]: I0227 19:38:14.307163 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:15 crc kubenswrapper[4941]: I0227 19:38:15.304853 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:15 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:15 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:15 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:15 crc kubenswrapper[4941]: I0227 19:38:15.306090 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:16 crc kubenswrapper[4941]: I0227 19:38:16.303945 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:16 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:16 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:16 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:16 crc kubenswrapper[4941]: I0227 19:38:16.303998 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:17 crc kubenswrapper[4941]: I0227 19:38:17.303836 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:17 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:17 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:17 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:17 
crc kubenswrapper[4941]: I0227 19:38:17.303914 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:17 crc kubenswrapper[4941]: I0227 19:38:17.913032 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-w7qqt" Feb 27 19:38:18 crc kubenswrapper[4941]: I0227 19:38:18.307062 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:18 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:18 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:18 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:18 crc kubenswrapper[4941]: I0227 19:38:18.307110 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:18 crc kubenswrapper[4941]: I0227 19:38:18.462732 4941 patch_prober.go:28] interesting pod/console-f9d7485db-fpgvv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 27 19:38:18 crc kubenswrapper[4941]: I0227 19:38:18.462795 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fpgvv" podUID="15767381-d283-4c54-8c38-19d68dec9371" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection 
refused" Feb 27 19:38:19 crc kubenswrapper[4941]: I0227 19:38:19.303974 4941 patch_prober.go:28] interesting pod/router-default-5444994796-lx7mk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 19:38:19 crc kubenswrapper[4941]: [-]has-synced failed: reason withheld Feb 27 19:38:19 crc kubenswrapper[4941]: [+]process-running ok Feb 27 19:38:19 crc kubenswrapper[4941]: healthz check failed Feb 27 19:38:19 crc kubenswrapper[4941]: I0227 19:38:19.304057 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lx7mk" podUID="07c961ab-73e0-4b79-9d5d-f181aa535bca" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:38:20 crc kubenswrapper[4941]: E0227 19:38:20.241893 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:38:20 crc kubenswrapper[4941]: E0227 19:38:20.242319 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:38:20 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:38:20 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-k4vjk_openshift-infra(0d98f658-1f8e-41f5-bc4e-2f442243e453): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:38:20 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:38:20 crc kubenswrapper[4941]: E0227 19:38:20.243505 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:38:20 crc kubenswrapper[4941]: I0227 19:38:20.305271 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:38:20 crc kubenswrapper[4941]: I0227 19:38:20.308447 4941 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lx7mk" Feb 27 19:38:24 crc kubenswrapper[4941]: I0227 19:38:24.880062 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:24 crc kubenswrapper[4941]: I0227 19:38:24.880676 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" podUID="b8728292-0583-4068-9926-a5a3f516408f" containerName="controller-manager" containerID="cri-o://6e632a98d5e3adf2b51cc187b47a14c3b2d3730c50daa10efbebfc1507b2a926" gracePeriod=30 Feb 27 19:38:24 crc kubenswrapper[4941]: I0227 19:38:24.895324 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:24 crc kubenswrapper[4941]: I0227 19:38:24.895562 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" containerName="route-controller-manager" containerID="cri-o://e176b4f37d63173a9ad96762e670c657a0ed6cee6b913061811373e966e597c8" gracePeriod=30 Feb 27 19:38:26 crc kubenswrapper[4941]: I0227 19:38:26.816735 4941 generic.go:334] "Generic (PLEG): container finished" podID="b8728292-0583-4068-9926-a5a3f516408f" containerID="6e632a98d5e3adf2b51cc187b47a14c3b2d3730c50daa10efbebfc1507b2a926" exitCode=0 Feb 27 19:38:26 crc kubenswrapper[4941]: I0227 19:38:26.816805 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" event={"ID":"b8728292-0583-4068-9926-a5a3f516408f","Type":"ContainerDied","Data":"6e632a98d5e3adf2b51cc187b47a14c3b2d3730c50daa10efbebfc1507b2a926"} Feb 27 19:38:26 crc kubenswrapper[4941]: I0227 19:38:26.818508 4941 generic.go:334] 
"Generic (PLEG): container finished" podID="3ac63b9c-3a83-495c-98e9-b18102769926" containerID="e176b4f37d63173a9ad96762e670c657a0ed6cee6b913061811373e966e597c8" exitCode=0 Feb 27 19:38:26 crc kubenswrapper[4941]: I0227 19:38:26.818535 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" event={"ID":"3ac63b9c-3a83-495c-98e9-b18102769926","Type":"ContainerDied","Data":"e176b4f37d63173a9ad96762e670c657a0ed6cee6b913061811373e966e597c8"} Feb 27 19:38:27 crc kubenswrapper[4941]: I0227 19:38:27.136859 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:38:27 crc kubenswrapper[4941]: I0227 19:38:27.524098 4941 patch_prober.go:28] interesting pod/controller-manager-86d688d665-l876b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 27 19:38:27 crc kubenswrapper[4941]: I0227 19:38:27.524171 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" podUID="b8728292-0583-4068-9926-a5a3f516408f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 27 19:38:27 crc kubenswrapper[4941]: I0227 19:38:27.588405 4941 patch_prober.go:28] interesting pod/route-controller-manager-7d974ffff6-dj7q7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 27 19:38:27 crc kubenswrapper[4941]: I0227 19:38:27.588509 4941 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.531941 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.535721 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fpgvv" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.675766 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.675957 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjlfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5wzr2_openshift-marketplace(e1c166e9-0f02-4929-b623-404b062973fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.677196 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5wzr2" podUID="e1c166e9-0f02-4929-b623-404b062973fc" Feb 27 19:38:28 crc 
kubenswrapper[4941]: I0227 19:38:28.710703 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767036 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"] Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.767270 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98336c71-e738-434a-9824-fbdd0419bd57" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767284 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="98336c71-e738-434a-9824-fbdd0419bd57" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.767304 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" containerName="route-controller-manager" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767311 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" containerName="route-controller-manager" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.767318 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3675d246-e23b-476d-a459-8ed197d667d4" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767328 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="3675d246-e23b-476d-a459-8ed197d667d4" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767518 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="98336c71-e738-434a-9824-fbdd0419bd57" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767533 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" containerName="route-controller-manager" Feb 27 
19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767546 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="3675d246-e23b-476d-a459-8ed197d667d4" containerName="pruner" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.767936 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.785481 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"] Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.836826 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" event={"ID":"3ac63b9c-3a83-495c-98e9-b18102769926","Type":"ContainerDied","Data":"70fba4469d59e096658b181e69bd599ba362857dbbd72c693d163006e54830ef"} Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.836891 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.836921 4941 scope.go:117] "RemoveContainer" containerID="e176b4f37d63173a9ad96762e670c657a0ed6cee6b913061811373e966e597c8" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.839786 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5wzr2" podUID="e1c166e9-0f02-4929-b623-404b062973fc" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853159 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48jcp\" (UniqueName: \"kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp\") pod \"3ac63b9c-3a83-495c-98e9-b18102769926\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853248 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config\") pod \"3ac63b9c-3a83-495c-98e9-b18102769926\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853339 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert\") pod \"3ac63b9c-3a83-495c-98e9-b18102769926\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853371 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca\") pod 
\"3ac63b9c-3a83-495c-98e9-b18102769926\" (UID: \"3ac63b9c-3a83-495c-98e9-b18102769926\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853590 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853633 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmlj\" (UniqueName: \"kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853675 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.853692 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.854446 4941 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ac63b9c-3a83-495c-98e9-b18102769926" (UID: "3ac63b9c-3a83-495c-98e9-b18102769926"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.854933 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config" (OuterVolumeSpecName: "config") pod "3ac63b9c-3a83-495c-98e9-b18102769926" (UID: "3ac63b9c-3a83-495c-98e9-b18102769926"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.859713 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp" (OuterVolumeSpecName: "kube-api-access-48jcp") pod "3ac63b9c-3a83-495c-98e9-b18102769926" (UID: "3ac63b9c-3a83-495c-98e9-b18102769926"). InnerVolumeSpecName "kube-api-access-48jcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.862297 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ac63b9c-3a83-495c-98e9-b18102769926" (UID: "3ac63b9c-3a83-495c-98e9-b18102769926"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.886600 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.917234 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.917414 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftrsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n48bk_openshift-marketplace(91aa0e95-3a50-4027-abeb-b8bd2abbcea5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.918632 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-n48bk" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.954855 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca\") pod \"b8728292-0583-4068-9926-a5a3f516408f\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.954935 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert\") pod \"b8728292-0583-4068-9926-a5a3f516408f\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " Feb 27 19:38:28 crc 
kubenswrapper[4941]: I0227 19:38:28.954970 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles\") pod \"b8728292-0583-4068-9926-a5a3f516408f\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.954999 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config\") pod \"b8728292-0583-4068-9926-a5a3f516408f\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955060 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvcqp\" (UniqueName: \"kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp\") pod \"b8728292-0583-4068-9926-a5a3f516408f\" (UID: \"b8728292-0583-4068-9926-a5a3f516408f\") " Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955223 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmlj\" (UniqueName: \"kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955339 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955366 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955451 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955554 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac63b9c-3a83-495c-98e9-b18102769926-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955572 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955586 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48jcp\" (UniqueName: \"kubernetes.io/projected/3ac63b9c-3a83-495c-98e9-b18102769926-kube-api-access-48jcp\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955600 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac63b9c-3a83-495c-98e9-b18102769926-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955819 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8728292-0583-4068-9926-a5a3f516408f" (UID: "b8728292-0583-4068-9926-a5a3f516408f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.955911 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config" (OuterVolumeSpecName: "config") pod "b8728292-0583-4068-9926-a5a3f516408f" (UID: "b8728292-0583-4068-9926-a5a3f516408f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.956442 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.956798 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b8728292-0583-4068-9926-a5a3f516408f" (UID: "b8728292-0583-4068-9926-a5a3f516408f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.957490 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.959731 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8728292-0583-4068-9926-a5a3f516408f" (UID: "b8728292-0583-4068-9926-a5a3f516408f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.959758 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp" (OuterVolumeSpecName: "kube-api-access-zvcqp") pod "b8728292-0583-4068-9926-a5a3f516408f" (UID: "b8728292-0583-4068-9926-a5a3f516408f"). InnerVolumeSpecName "kube-api-access-zvcqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.960372 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: I0227 19:38:28.971322 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmlj\" (UniqueName: \"kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj\") pod \"route-controller-manager-5fd9f76c67-8rk58\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") " pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.978871 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.979059 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6xd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9kv24_openshift-marketplace(81b6db0c-c9b7-4f84-8ec4-c690e0c59788): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.979126 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.979218 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xstmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-r9t78_openshift-marketplace(5d3d1f1c-429f-4fd3-a28d-089c23afbbba): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.980248 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:38:28 crc kubenswrapper[4941]: E0227 19:38:28.980314 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.057080 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.057121 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b8728292-0583-4068-9926-a5a3f516408f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.057132 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.057142 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8728292-0583-4068-9926-a5a3f516408f-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.057151 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvcqp\" (UniqueName: \"kubernetes.io/projected/b8728292-0583-4068-9926-a5a3f516408f-kube-api-access-zvcqp\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.081433 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.171179 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.173851 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d974ffff6-dj7q7"] Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.283926 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"] Feb 27 19:38:29 crc kubenswrapper[4941]: E0227 19:38:29.666494 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:38:29 crc kubenswrapper[4941]: E0227 19:38:29.667016 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j5cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4bjr9_openshift-marketplace(05415fb4-4075-493f-91c7-a53f30a70618): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 19:38:29 crc kubenswrapper[4941]: E0227 19:38:29.668227 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4bjr9" podUID="05415fb4-4075-493f-91c7-a53f30a70618" Feb 27 19:38:29 crc 
kubenswrapper[4941]: I0227 19:38:29.842016 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" event={"ID":"b8728292-0583-4068-9926-a5a3f516408f","Type":"ContainerDied","Data":"95fe716656d62fc9fa2db9c7c5022f79bbefbcebc98f2ddddc5f7dd20f447136"} Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.842067 4941 scope.go:117] "RemoveContainer" containerID="6e632a98d5e3adf2b51cc187b47a14c3b2d3730c50daa10efbebfc1507b2a926" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.842136 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d688d665-l876b" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.850002 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" event={"ID":"2d5c83e7-8d92-4453-b41e-76b48d2dea05","Type":"ContainerStarted","Data":"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"} Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.850038 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" event={"ID":"2d5c83e7-8d92-4453-b41e-76b48d2dea05","Type":"ContainerStarted","Data":"b085ecc4c759be9b31ee3abec80e3e473d05c3a125d1a3f6cf5f5df9d9417a75"} Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.850694 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.850738 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" 
podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.870784 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:29 crc kubenswrapper[4941]: I0227 19:38:29.874521 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d688d665-l876b"] Feb 27 19:38:30 crc kubenswrapper[4941]: I0227 19:38:30.473939 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac63b9c-3a83-495c-98e9-b18102769926" path="/var/lib/kubelet/pods/3ac63b9c-3a83-495c-98e9-b18102769926/volumes" Feb 27 19:38:30 crc kubenswrapper[4941]: I0227 19:38:30.474885 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8728292-0583-4068-9926-a5a3f516408f" path="/var/lib/kubelet/pods/b8728292-0583-4068-9926-a5a3f516408f/volumes" Feb 27 19:38:30 crc kubenswrapper[4941]: I0227 19:38:30.860415 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:30 crc kubenswrapper[4941]: I0227 19:38:30.866088 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" Feb 27 19:38:30 crc kubenswrapper[4941]: I0227 19:38:30.891543 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" podStartSLOduration=6.891510217 podStartE2EDuration="6.891510217s" podCreationTimestamp="2026-02-27 19:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
19:38:30.885841877 +0000 UTC m=+229.146982307" watchObservedRunningTime="2026-02-27 19:38:30.891510217 +0000 UTC m=+229.152650647" Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.765480 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"] Feb 27 19:38:31 crc kubenswrapper[4941]: E0227 19:38:31.765717 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8728292-0583-4068-9926-a5a3f516408f" containerName="controller-manager" Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.765729 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8728292-0583-4068-9926-a5a3f516408f" containerName="controller-manager" Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.765821 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8728292-0583-4068-9926-a5a3f516408f" containerName="controller-manager" Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.766328 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.768928 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.768989 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.770516 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.772429 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.772668 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.772894 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.793949 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"]
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.803896 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.891080 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.891152 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9bs\" (UniqueName: \"kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.891255 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.891300 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.891337 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.992623 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.992698 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9bs\" (UniqueName: \"kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.992771 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.992832 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.992861 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.993825 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.995133 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:31 crc kubenswrapper[4941]: I0227 19:38:31.996719 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:32 crc kubenswrapper[4941]: I0227 19:38:32.000411 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:32 crc kubenswrapper[4941]: I0227 19:38:32.016557 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9bs\" (UniqueName: \"kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs\") pod \"controller-manager-9fb74957b-br7gv\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") " pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:32 crc kubenswrapper[4941]: I0227 19:38:32.088661 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:32 crc kubenswrapper[4941]: I0227 19:38:32.283119 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"]
Feb 27 19:38:32 crc kubenswrapper[4941]: I0227 19:38:32.871962 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" event={"ID":"a623256a-451f-4c95-8e40-be9feb788557","Type":"ContainerStarted","Data":"d7f1390608184b1137c9f65c5b304cb40ebba032ba92845b5b54c27148aa90a9"}
Feb 27 19:38:33 crc kubenswrapper[4941]: I0227 19:38:33.878336 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" event={"ID":"a623256a-451f-4c95-8e40-be9feb788557","Type":"ContainerStarted","Data":"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"}
Feb 27 19:38:33 crc kubenswrapper[4941]: I0227 19:38:33.878649 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:33 crc kubenswrapper[4941]: I0227 19:38:33.883340 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:33 crc kubenswrapper[4941]: I0227 19:38:33.896210 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" podStartSLOduration=9.89618898 podStartE2EDuration="9.89618898s" podCreationTimestamp="2026-02-27 19:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:33.893399151 +0000 UTC m=+232.154539571" watchObservedRunningTime="2026-02-27 19:38:33.89618898 +0000 UTC m=+232.157329400"
Feb 27 19:38:34 crc kubenswrapper[4941]: I0227 19:38:34.794843 4941 ???:1] "http: TLS handshake error from 192.168.126.11:52910: no serving certificate available for the kubelet"
Feb 27 19:38:35 crc kubenswrapper[4941]: E0227 19:38:35.469561 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453"
Feb 27 19:38:38 crc kubenswrapper[4941]: I0227 19:38:38.490311 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qhw8d"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.245176 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.246055 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.248731 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.250429 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.250927 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.318646 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.318788 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.420164 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.420272 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.420376 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.443988 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:41 crc kubenswrapper[4941]: I0227 19:38:41.567227 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 19:38:43 crc kubenswrapper[4941]: E0227 19:38:43.132691 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 19:38:43 crc kubenswrapper[4941]: E0227 19:38:43.133282 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blwb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kllrr_openshift-marketplace(bd71dd28-494b-4f92-8cf2-f79b4709c6d5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 19:38:43 crc kubenswrapper[4941]: E0227 19:38:43.135419 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5"
Feb 27 19:38:43 crc kubenswrapper[4941]: E0227 19:38:43.144749 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n48bk" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5"
Feb 27 19:38:43 crc kubenswrapper[4941]: E0227 19:38:43.144912 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788"
Feb 27 19:38:43 crc kubenswrapper[4941]: I0227 19:38:43.577154 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 19:38:43 crc kubenswrapper[4941]: I0227 19:38:43.936392 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75702971-35fd-4ecb-b63e-b9203fa30a1b","Type":"ContainerStarted","Data":"0fe525eefe8ecdac7db020f2d85949c77190f706c0e46ee5d0b050654e085a51"}
Feb 27 19:38:44 crc kubenswrapper[4941]: E0227 19:38:44.468681 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba"
Feb 27 19:38:44 crc kubenswrapper[4941]: I0227 19:38:44.926531 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"]
Feb 27 19:38:44 crc kubenswrapper[4941]: I0227 19:38:44.926872 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" podUID="a623256a-451f-4c95-8e40-be9feb788557" containerName="controller-manager" containerID="cri-o://5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516" gracePeriod=30
Feb 27 19:38:44 crc kubenswrapper[4941]: I0227 19:38:44.944394 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75702971-35fd-4ecb-b63e-b9203fa30a1b","Type":"ContainerStarted","Data":"355133c8a79d3f6e242e1cbd418eb653d64b412c8edde5c57e8cf8838cf26694"}
Feb 27 19:38:44 crc kubenswrapper[4941]: I0227 19:38:44.965146 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.965109897 podStartE2EDuration="3.965109897s" podCreationTimestamp="2026-02-27 19:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:44.958390417 +0000 UTC m=+243.219530857" watchObservedRunningTime="2026-02-27 19:38:44.965109897 +0000 UTC m=+243.226250317"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.036141 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"]
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.037050 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" podUID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" containerName="route-controller-manager" containerID="cri-o://3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45" gracePeriod=30
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.540634 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.545860 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.585941 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca\") pod \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586065 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert\") pod \"a623256a-451f-4c95-8e40-be9feb788557\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586096 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles\") pod \"a623256a-451f-4c95-8e40-be9feb788557\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586125 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config\") pod \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586152 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert\") pod \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586166 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config\") pod \"a623256a-451f-4c95-8e40-be9feb788557\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586191 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmlj\" (UniqueName: \"kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj\") pod \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\" (UID: \"2d5c83e7-8d92-4453-b41e-76b48d2dea05\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586210 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca\") pod \"a623256a-451f-4c95-8e40-be9feb788557\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.586245 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx9bs\" (UniqueName: \"kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs\") pod \"a623256a-451f-4c95-8e40-be9feb788557\" (UID: \"a623256a-451f-4c95-8e40-be9feb788557\") "
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.591580 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a623256a-451f-4c95-8e40-be9feb788557" (UID: "a623256a-451f-4c95-8e40-be9feb788557"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.592074 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config" (OuterVolumeSpecName: "config") pod "2d5c83e7-8d92-4453-b41e-76b48d2dea05" (UID: "2d5c83e7-8d92-4453-b41e-76b48d2dea05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.592968 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config" (OuterVolumeSpecName: "config") pod "a623256a-451f-4c95-8e40-be9feb788557" (UID: "a623256a-451f-4c95-8e40-be9feb788557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.593051 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d5c83e7-8d92-4453-b41e-76b48d2dea05" (UID: "2d5c83e7-8d92-4453-b41e-76b48d2dea05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.593163 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs" (OuterVolumeSpecName: "kube-api-access-nx9bs") pod "a623256a-451f-4c95-8e40-be9feb788557" (UID: "a623256a-451f-4c95-8e40-be9feb788557"). InnerVolumeSpecName "kube-api-access-nx9bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.594022 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca" (OuterVolumeSpecName: "client-ca") pod "a623256a-451f-4c95-8e40-be9feb788557" (UID: "a623256a-451f-4c95-8e40-be9feb788557"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.596294 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj" (OuterVolumeSpecName: "kube-api-access-fcmlj") pod "2d5c83e7-8d92-4453-b41e-76b48d2dea05" (UID: "2d5c83e7-8d92-4453-b41e-76b48d2dea05"). InnerVolumeSpecName "kube-api-access-fcmlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.596555 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d5c83e7-8d92-4453-b41e-76b48d2dea05" (UID: "2d5c83e7-8d92-4453-b41e-76b48d2dea05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.596655 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a623256a-451f-4c95-8e40-be9feb788557" (UID: "a623256a-451f-4c95-8e40-be9feb788557"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687548 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx9bs\" (UniqueName: \"kubernetes.io/projected/a623256a-451f-4c95-8e40-be9feb788557-kube-api-access-nx9bs\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687580 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687591 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a623256a-451f-4c95-8e40-be9feb788557-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687600 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687610 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5c83e7-8d92-4453-b41e-76b48d2dea05-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687618 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c83e7-8d92-4453-b41e-76b48d2dea05-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687626 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687637 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmlj\" (UniqueName: \"kubernetes.io/projected/2d5c83e7-8d92-4453-b41e-76b48d2dea05-kube-api-access-fcmlj\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.687644 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a623256a-451f-4c95-8e40-be9feb788557-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.964651 4941 generic.go:334] "Generic (PLEG): container finished" podID="a623256a-451f-4c95-8e40-be9feb788557" containerID="5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516" exitCode=0
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.964724 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" event={"ID":"a623256a-451f-4c95-8e40-be9feb788557","Type":"ContainerDied","Data":"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.964725 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.964752 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb74957b-br7gv" event={"ID":"a623256a-451f-4c95-8e40-be9feb788557","Type":"ContainerDied","Data":"d7f1390608184b1137c9f65c5b304cb40ebba032ba92845b5b54c27148aa90a9"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.964771 4941 scope.go:117] "RemoveContainer" containerID="5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.968430 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerStarted","Data":"52d53d2e4a2038beba916ed8d057d55cffbdefd6224dcc41d4ce4007b0829d8f"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.973298 4941 generic.go:334] "Generic (PLEG): container finished" podID="75702971-35fd-4ecb-b63e-b9203fa30a1b" containerID="355133c8a79d3f6e242e1cbd418eb653d64b412c8edde5c57e8cf8838cf26694" exitCode=0
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.973378 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75702971-35fd-4ecb-b63e-b9203fa30a1b","Type":"ContainerDied","Data":"355133c8a79d3f6e242e1cbd418eb653d64b412c8edde5c57e8cf8838cf26694"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.974900 4941 generic.go:334] "Generic (PLEG): container finished" podID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" containerID="3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45" exitCode=0
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.974967 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" event={"ID":"2d5c83e7-8d92-4453-b41e-76b48d2dea05","Type":"ContainerDied","Data":"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.974995 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58" event={"ID":"2d5c83e7-8d92-4453-b41e-76b48d2dea05","Type":"ContainerDied","Data":"b085ecc4c759be9b31ee3abec80e3e473d05c3a125d1a3f6cf5f5df9d9417a75"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.975046 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.977370 4941 generic.go:334] "Generic (PLEG): container finished" podID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerID="ec136525ccdf446c931da7012440a8ee7e543218ad1dc55e23b9e0d53c5b917f" exitCode=0
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.977432 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerDied","Data":"ec136525ccdf446c931da7012440a8ee7e543218ad1dc55e23b9e0d53c5b917f"}
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.990717 4941 scope.go:117] "RemoveContainer" containerID="5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"
Feb 27 19:38:45 crc kubenswrapper[4941]: E0227 19:38:45.993204 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516\": container with ID starting with 5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516 not found: ID does not exist" containerID="5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.993263 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516"} err="failed to get container status \"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516\": rpc error: code = NotFound desc = could not find container \"5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516\": container with ID starting with 5d2b4b8282b2f5d78fdb71939fcd3ccab0aac784e49ba9211c2a0d7b0d3e7516 not found: ID does not exist"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.993296 4941 scope.go:117] "RemoveContainer" containerID="3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.997813 4941 generic.go:334] "Generic (PLEG): container finished" podID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerID="ad83d1c22521280648a7e939824a67a3b7e1493bea82bf8e641acbd71fb2939a" exitCode=0
Feb 27 19:38:45 crc kubenswrapper[4941]: I0227 19:38:45.998314 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerDied","Data":"ad83d1c22521280648a7e939824a67a3b7e1493bea82bf8e641acbd71fb2939a"}
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.028569 4941 scope.go:117] "RemoveContainer" containerID="3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"
Feb 27 19:38:46 crc kubenswrapper[4941]: E0227 19:38:46.029606 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45\": container with ID starting with 3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45 not found: ID does not exist" containerID="3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.029667 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45"} err="failed to get container status \"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45\": rpc error: code = NotFound desc = could not find container \"3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45\": container with ID starting with 3b3f9f314f9e75c2bfb35fc54456d9a9e377020f6e1b933d4c1f37eeab9fed45 not found: ID does not exist"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.047250 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"]
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.055288 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9fb74957b-br7gv"]
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.081036 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"]
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.083206 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fd9f76c67-8rk58"]
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.475651 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" path="/var/lib/kubelet/pods/2d5c83e7-8d92-4453-b41e-76b48d2dea05/volumes"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.476174 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a623256a-451f-4c95-8e40-be9feb788557" path="/var/lib/kubelet/pods/a623256a-451f-4c95-8e40-be9feb788557/volumes"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.770458 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"]
Feb 27 19:38:46 crc kubenswrapper[4941]: E0227 19:38:46.770845 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a623256a-451f-4c95-8e40-be9feb788557" containerName="controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.770863 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="a623256a-451f-4c95-8e40-be9feb788557" containerName="controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: E0227 19:38:46.770887 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" containerName="route-controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.770896 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" containerName="route-controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.771029 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5c83e7-8d92-4453-b41e-76b48d2dea05" containerName="route-controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.771049 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a623256a-451f-4c95-8e40-be9feb788557" containerName="controller-manager"
Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.771570 4941 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.773543 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.775891 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.776041 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.776295 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.776415 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.777090 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.777096 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.778699 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.781785 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.782197 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.782365 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.782413 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.782674 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.783340 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.799968 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.808874 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"] Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.811886 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.912868 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913542 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpjk\" (UniqueName: \"kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913673 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913719 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913812 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " 
pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913854 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913881 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.913979 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:46 crc kubenswrapper[4941]: I0227 19:38:46.914013 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbl4z\" (UniqueName: \"kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.010208 4941 generic.go:334] "Generic (PLEG): container finished" 
podID="05415fb4-4075-493f-91c7-a53f30a70618" containerID="52d53d2e4a2038beba916ed8d057d55cffbdefd6224dcc41d4ce4007b0829d8f" exitCode=0 Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.010284 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerDied","Data":"52d53d2e4a2038beba916ed8d057d55cffbdefd6224dcc41d4ce4007b0829d8f"} Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.013709 4941 generic.go:334] "Generic (PLEG): container finished" podID="e1c166e9-0f02-4929-b623-404b062973fc" containerID="bf10322c20a1ed58988c7c889408ce55b0e0564a09ed3cbc311d4c811f36b397" exitCode=0 Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.013741 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerDied","Data":"bf10322c20a1ed58988c7c889408ce55b0e0564a09ed3cbc311d4c811f36b397"} Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.014849 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.014878 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.014915 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.014954 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.014976 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.015000 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.015023 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbl4z\" (UniqueName: \"kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " 
pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.015063 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.015108 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpjk\" (UniqueName: \"kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.016259 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.016303 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.017384 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.018156 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.018563 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.024319 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.024807 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerStarted","Data":"e77375d1247ec009ac8a5c3de0bd931bc2582d164df1683ae14211b2e63cd1a5"} Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.026178 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.035906 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbl4z\" (UniqueName: \"kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z\") pod \"controller-manager-7dfb7b7695-2cfx6\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") " pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.035970 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpjk\" (UniqueName: \"kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk\") pod \"route-controller-manager-cf995b9fc-bjw7l\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") " pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.052677 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k84zp" podStartSLOduration=2.786172957 podStartE2EDuration="42.052649536s" podCreationTimestamp="2026-02-27 19:38:05 +0000 UTC" firstStartedPulling="2026-02-27 19:38:07.297099239 +0000 UTC m=+205.558239659" lastFinishedPulling="2026-02-27 19:38:46.563575818 +0000 UTC m=+244.824716238" observedRunningTime="2026-02-27 19:38:47.051924746 +0000 UTC m=+245.313065166" watchObservedRunningTime="2026-02-27 19:38:47.052649536 +0000 UTC m=+245.313789956" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.108942 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.171817 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.338179 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.521230 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir\") pod \"75702971-35fd-4ecb-b63e-b9203fa30a1b\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.521387 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access\") pod \"75702971-35fd-4ecb-b63e-b9203fa30a1b\" (UID: \"75702971-35fd-4ecb-b63e-b9203fa30a1b\") " Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.521621 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75702971-35fd-4ecb-b63e-b9203fa30a1b" (UID: "75702971-35fd-4ecb-b63e-b9203fa30a1b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.521873 4941 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75702971-35fd-4ecb-b63e-b9203fa30a1b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.529253 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75702971-35fd-4ecb-b63e-b9203fa30a1b" (UID: "75702971-35fd-4ecb-b63e-b9203fa30a1b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.531942 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"] Feb 27 19:38:47 crc kubenswrapper[4941]: W0227 19:38:47.537338 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622c4d5e_b31c_4462_a13f_367bc798f499.slice/crio-3b8742aafc712fd547d19e6f074d830ab0d940a75ad3b303c00e28f215ba8153 WatchSource:0}: Error finding container 3b8742aafc712fd547d19e6f074d830ab0d940a75ad3b303c00e28f215ba8153: Status 404 returned error can't find the container with id 3b8742aafc712fd547d19e6f074d830ab0d940a75ad3b303c00e28f215ba8153 Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.602708 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.626434 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75702971-35fd-4ecb-b63e-b9203fa30a1b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:47 crc 
kubenswrapper[4941]: W0227 19:38:47.634431 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbed537c3_9533_4953_a2bf_0df889e8ed81.slice/crio-4696013348a1738d5439fcb88aa9fe7950acf41800ba2dd83edb75bb8034bece WatchSource:0}: Error finding container 4696013348a1738d5439fcb88aa9fe7950acf41800ba2dd83edb75bb8034bece: Status 404 returned error can't find the container with id 4696013348a1738d5439fcb88aa9fe7950acf41800ba2dd83edb75bb8034bece Feb 27 19:38:47 crc kubenswrapper[4941]: I0227 19:38:47.635257 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.033249 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerStarted","Data":"e2d185a7a530c4a1a8675f265861df3cde5fb945066dcab33bbc201edeee30e3"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.036879 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerStarted","Data":"c3aa2a6d962c8c30eeb614f3bc4e9d4c973a1793a5d53112b2f86150cb756877"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.038601 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" event={"ID":"622c4d5e-b31c-4462-a13f-367bc798f499","Type":"ContainerStarted","Data":"d99ab26e707e561722d3774beed219dc98e346905ebe638855a373e3fae5a176"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.038980 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" 
event={"ID":"622c4d5e-b31c-4462-a13f-367bc798f499","Type":"ContainerStarted","Data":"3b8742aafc712fd547d19e6f074d830ab0d940a75ad3b303c00e28f215ba8153"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.039004 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.040688 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerStarted","Data":"63a678bba0bf66a4eb5e3bc0fc0e3a3d5e7e5748fa4d3e7d38135a6b816a32c0"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.041946 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.043660 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75702971-35fd-4ecb-b63e-b9203fa30a1b","Type":"ContainerDied","Data":"0fe525eefe8ecdac7db020f2d85949c77190f706c0e46ee5d0b050654e085a51"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.043762 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe525eefe8ecdac7db020f2d85949c77190f706c0e46ee5d0b050654e085a51" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.044759 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" event={"ID":"bed537c3-9533-4953-a2bf-0df889e8ed81","Type":"ContainerStarted","Data":"1c94b225921c519cf2deadb15fa03b9481d5061ac53e6c63317d9b3124b9d92a"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.044848 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" 
event={"ID":"bed537c3-9533-4953-a2bf-0df889e8ed81","Type":"ContainerStarted","Data":"4696013348a1738d5439fcb88aa9fe7950acf41800ba2dd83edb75bb8034bece"} Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.045570 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.057184 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5wzr2" podStartSLOduration=3.198208594 podStartE2EDuration="39.057165868s" podCreationTimestamp="2026-02-27 19:38:09 +0000 UTC" firstStartedPulling="2026-02-27 19:38:11.728334068 +0000 UTC m=+209.989474478" lastFinishedPulling="2026-02-27 19:38:47.587291332 +0000 UTC m=+245.848431752" observedRunningTime="2026-02-27 19:38:48.055721677 +0000 UTC m=+246.316862097" watchObservedRunningTime="2026-02-27 19:38:48.057165868 +0000 UTC m=+246.318306288" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.082838 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.090587 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" podStartSLOduration=4.090573581 podStartE2EDuration="4.090573581s" podCreationTimestamp="2026-02-27 19:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:48.090102328 +0000 UTC m=+246.351242748" watchObservedRunningTime="2026-02-27 19:38:48.090573581 +0000 UTC m=+246.351714001" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.114808 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8rmn" 
podStartSLOduration=3.70992537 podStartE2EDuration="40.114786575s" podCreationTimestamp="2026-02-27 19:38:08 +0000 UTC" firstStartedPulling="2026-02-27 19:38:10.7192979 +0000 UTC m=+208.980438330" lastFinishedPulling="2026-02-27 19:38:47.124159125 +0000 UTC m=+245.385299535" observedRunningTime="2026-02-27 19:38:48.112575762 +0000 UTC m=+246.373716182" watchObservedRunningTime="2026-02-27 19:38:48.114786575 +0000 UTC m=+246.375926995" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.151619 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bjr9" podStartSLOduration=4.209169104 podStartE2EDuration="40.151603554s" podCreationTimestamp="2026-02-27 19:38:08 +0000 UTC" firstStartedPulling="2026-02-27 19:38:11.728795831 +0000 UTC m=+209.989936251" lastFinishedPulling="2026-02-27 19:38:47.671230281 +0000 UTC m=+245.932370701" observedRunningTime="2026-02-27 19:38:48.150222425 +0000 UTC m=+246.411362845" watchObservedRunningTime="2026-02-27 19:38:48.151603554 +0000 UTC m=+246.412743974" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.175110 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" podStartSLOduration=3.175090358 podStartE2EDuration="3.175090358s" podCreationTimestamp="2026-02-27 19:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:48.172879815 +0000 UTC m=+246.434020235" watchObservedRunningTime="2026-02-27 19:38:48.175090358 +0000 UTC m=+246.436230768" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.185182 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.628632 4941 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 19:38:48 crc kubenswrapper[4941]: E0227 19:38:48.629280 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75702971-35fd-4ecb-b63e-b9203fa30a1b" containerName="pruner" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.629302 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="75702971-35fd-4ecb-b63e-b9203fa30a1b" containerName="pruner" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.629420 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="75702971-35fd-4ecb-b63e-b9203fa30a1b" containerName="pruner" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.629917 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.631666 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.632519 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.642605 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.677833 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.677879 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.741540 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.741733 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.741764 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.842483 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.842541 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.842603 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir\") pod 
\"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.842705 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.842899 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.864108 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:48 crc kubenswrapper[4941]: I0227 19:38:48.944998 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.257324 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.257863 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.446389 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 19:38:49 crc kubenswrapper[4941]: W0227 19:38:49.451583 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode21ea716_ea1a_46b4_93be_5be9ca44cccb.slice/crio-5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee WatchSource:0}: Error finding container 5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee: Status 404 returned error can't find the container with id 5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.701974 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.702337 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:49 crc kubenswrapper[4941]: I0227 19:38:49.877253 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-t8rmn" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="registry-server" probeResult="failure" output=< Feb 27 19:38:49 crc kubenswrapper[4941]: timeout: failed to connect service ":50051" within 1s Feb 27 19:38:49 crc kubenswrapper[4941]: > Feb 27 19:38:50 crc kubenswrapper[4941]: I0227 19:38:50.059427 4941 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e21ea716-ea1a-46b4-93be-5be9ca44cccb","Type":"ContainerStarted","Data":"9c11321b0c4e2d766b4d506b4c70643a7f52b439bedbd5100dc50c1a6c89f4fc"} Feb 27 19:38:50 crc kubenswrapper[4941]: I0227 19:38:50.059481 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e21ea716-ea1a-46b4-93be-5be9ca44cccb","Type":"ContainerStarted","Data":"5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee"} Feb 27 19:38:50 crc kubenswrapper[4941]: I0227 19:38:50.078831 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.078797936 podStartE2EDuration="2.078797936s" podCreationTimestamp="2026-02-27 19:38:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:38:50.074553336 +0000 UTC m=+248.335693766" watchObservedRunningTime="2026-02-27 19:38:50.078797936 +0000 UTC m=+248.339938356" Feb 27 19:38:50 crc kubenswrapper[4941]: I0227 19:38:50.299396 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bjr9" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="registry-server" probeResult="failure" output=< Feb 27 19:38:50 crc kubenswrapper[4941]: timeout: failed to connect service ":50051" within 1s Feb 27 19:38:50 crc kubenswrapper[4941]: > Feb 27 19:38:50 crc kubenswrapper[4941]: I0227 19:38:50.748316 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5wzr2" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="registry-server" probeResult="failure" output=< Feb 27 19:38:50 crc kubenswrapper[4941]: timeout: failed to connect service ":50051" within 1s Feb 27 19:38:50 crc kubenswrapper[4941]: > Feb 27 19:38:55 crc kubenswrapper[4941]: E0227 
19:38:55.152517 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:38:55 crc kubenswrapper[4941]: E0227 19:38:55.153682 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6xd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fall
backToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9kv24_openshift-marketplace(81b6db0c-c9b7-4f84-8ec4-c690e0c59788): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:38:55 crc kubenswrapper[4941]: E0227 19:38:55.154966 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:38:55 crc kubenswrapper[4941]: I0227 19:38:55.766266 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k84zp" Feb 27 19:38:55 crc kubenswrapper[4941]: I0227 19:38:55.766346 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k84zp" Feb 27 19:38:55 crc kubenswrapper[4941]: I0227 19:38:55.818873 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k84zp" Feb 27 19:38:56 crc kubenswrapper[4941]: I0227 19:38:56.149154 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k84zp" Feb 27 19:38:56 crc kubenswrapper[4941]: I0227 19:38:56.896919 4941 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-k84zp"] Feb 27 19:38:58 crc kubenswrapper[4941]: I0227 19:38:58.116556 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k84zp" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="registry-server" containerID="cri-o://e77375d1247ec009ac8a5c3de0bd931bc2582d164df1683ae14211b2e63cd1a5" gracePeriod=2 Feb 27 19:38:58 crc kubenswrapper[4941]: I0227 19:38:58.724967 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:58 crc kubenswrapper[4941]: I0227 19:38:58.762944 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.298762 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.341987 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.759160 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.819273 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5wzr2" Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.850974 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:38:59 crc kubenswrapper[4941]: I0227 19:38:59.851075 
4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:39:00 crc kubenswrapper[4941]: I0227 19:39:00.698934 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"] Feb 27 19:39:00 crc kubenswrapper[4941]: I0227 19:39:00.699176 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8rmn" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="registry-server" containerID="cri-o://c3aa2a6d962c8c30eeb614f3bc4e9d4c973a1793a5d53112b2f86150cb756877" gracePeriod=2 Feb 27 19:39:01 crc kubenswrapper[4941]: I0227 19:39:01.134657 4941 generic.go:334] "Generic (PLEG): container finished" podID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerID="e77375d1247ec009ac8a5c3de0bd931bc2582d164df1683ae14211b2e63cd1a5" exitCode=0 Feb 27 19:39:01 crc kubenswrapper[4941]: I0227 19:39:01.134708 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerDied","Data":"e77375d1247ec009ac8a5c3de0bd931bc2582d164df1683ae14211b2e63cd1a5"} Feb 27 19:39:01 crc kubenswrapper[4941]: I0227 19:39:01.698230 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"] Feb 27 19:39:01 crc kubenswrapper[4941]: I0227 19:39:01.698787 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5wzr2" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="registry-server" containerID="cri-o://e2d185a7a530c4a1a8675f265861df3cde5fb945066dcab33bbc201edeee30e3" gracePeriod=2 Feb 27 
19:39:02 crc kubenswrapper[4941]: I0227 19:39:02.141970 4941 generic.go:334] "Generic (PLEG): container finished" podID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerID="c3aa2a6d962c8c30eeb614f3bc4e9d4c973a1793a5d53112b2f86150cb756877" exitCode=0 Feb 27 19:39:02 crc kubenswrapper[4941]: I0227 19:39:02.142032 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerDied","Data":"c3aa2a6d962c8c30eeb614f3bc4e9d4c973a1793a5d53112b2f86150cb756877"} Feb 27 19:39:02 crc kubenswrapper[4941]: E0227 19:39:02.309612 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:02 crc kubenswrapper[4941]: E0227 19:39:02.309783 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xstmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r9t78_openshift-marketplace(5d3d1f1c-429f-4fd3-a28d-089c23afbbba): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:02 crc kubenswrapper[4941]: E0227 19:39:02.310975 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.154438 4941 generic.go:334] "Generic (PLEG): container finished" podID="e1c166e9-0f02-4929-b623-404b062973fc" containerID="e2d185a7a530c4a1a8675f265861df3cde5fb945066dcab33bbc201edeee30e3" exitCode=0 Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.154510 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerDied","Data":"e2d185a7a530c4a1a8675f265861df3cde5fb945066dcab33bbc201edeee30e3"} Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.782540 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8rmn" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.786980 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k84zp" Feb 27 19:39:04 crc kubenswrapper[4941]: E0227 19:39:04.846630 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:39:04 crc kubenswrapper[4941]: E0227 19:39:04.846768 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:39:04 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:39:04 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-k4vjk_openshift-infra(0d98f658-1f8e-41f5-bc4e-2f442243e453): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 27 19:39:04 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:39:04 crc kubenswrapper[4941]: E0227 19:39:04.848809 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869672 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content\") pod \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869819 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content\") pod \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869879 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8246p\" (UniqueName: \"kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p\") pod \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869910 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s74z8\" (UniqueName: \"kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8\") pod \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\" (UID: \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869967 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities\") pod \"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\" (UID: 
\"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.869986 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities\") pod \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\" (UID: \"2fbfd7a3-e234-4afe-aaa4-f89d486c7164\") " Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.870766 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities" (OuterVolumeSpecName: "utilities") pod "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" (UID: "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.871012 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities" (OuterVolumeSpecName: "utilities") pod "2fbfd7a3-e234-4afe-aaa4-f89d486c7164" (UID: "2fbfd7a3-e234-4afe-aaa4-f89d486c7164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.876805 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p" (OuterVolumeSpecName: "kube-api-access-8246p") pod "2fbfd7a3-e234-4afe-aaa4-f89d486c7164" (UID: "2fbfd7a3-e234-4afe-aaa4-f89d486c7164"). InnerVolumeSpecName "kube-api-access-8246p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.880633 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8" (OuterVolumeSpecName: "kube-api-access-s74z8") pod "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" (UID: "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a"). InnerVolumeSpecName "kube-api-access-s74z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.903466 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fbfd7a3-e234-4afe-aaa4-f89d486c7164" (UID: "2fbfd7a3-e234-4afe-aaa4-f89d486c7164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.936110 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.936407 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" podUID="bed537c3-9533-4953-a2bf-0df889e8ed81" containerName="controller-manager" containerID="cri-o://1c94b225921c519cf2deadb15fa03b9481d5061ac53e6c63317d9b3124b9d92a" gracePeriod=30 Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.946558 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"] Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.947183 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" 
podUID="622c4d5e-b31c-4462-a13f-367bc798f499" containerName="route-controller-manager" containerID="cri-o://d99ab26e707e561722d3774beed219dc98e346905ebe638855a373e3fae5a176" gracePeriod=30 Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.967553 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" (UID: "f8ac2de3-f395-4fe8-90f4-a7fa58792f5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.971817 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8246p\" (UniqueName: \"kubernetes.io/projected/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-kube-api-access-8246p\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.971882 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s74z8\" (UniqueName: \"kubernetes.io/projected/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-kube-api-access-s74z8\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.971898 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.971913 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:04 crc kubenswrapper[4941]: I0227 19:39:04.971927 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd7a3-e234-4afe-aaa4-f89d486c7164-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:04 
crc kubenswrapper[4941]: I0227 19:39:04.971939 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.164993 4941 generic.go:334] "Generic (PLEG): container finished" podID="bed537c3-9533-4953-a2bf-0df889e8ed81" containerID="1c94b225921c519cf2deadb15fa03b9481d5061ac53e6c63317d9b3124b9d92a" exitCode=0
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.165086 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" event={"ID":"bed537c3-9533-4953-a2bf-0df889e8ed81","Type":"ContainerDied","Data":"1c94b225921c519cf2deadb15fa03b9481d5061ac53e6c63317d9b3124b9d92a"}
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.168324 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k84zp"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.168322 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k84zp" event={"ID":"f8ac2de3-f395-4fe8-90f4-a7fa58792f5a","Type":"ContainerDied","Data":"2ccdc9351b36ef93ce9b150165f87d159297e220018a2d47e4a261699d63be3a"}
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.168428 4941 scope.go:117] "RemoveContainer" containerID="e77375d1247ec009ac8a5c3de0bd931bc2582d164df1683ae14211b2e63cd1a5"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.170765 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8rmn" event={"ID":"2fbfd7a3-e234-4afe-aaa4-f89d486c7164","Type":"ContainerDied","Data":"32315a623a92f2d30c955b9b1eac1352abce57abee3d1ffb3915de731068b3b1"}
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.170875 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8rmn"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.175459 4941 generic.go:334] "Generic (PLEG): container finished" podID="622c4d5e-b31c-4462-a13f-367bc798f499" containerID="d99ab26e707e561722d3774beed219dc98e346905ebe638855a373e3fae5a176" exitCode=0
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.175520 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" event={"ID":"622c4d5e-b31c-4462-a13f-367bc798f499","Type":"ContainerDied","Data":"d99ab26e707e561722d3774beed219dc98e346905ebe638855a373e3fae5a176"}
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.233552 4941 scope.go:117] "RemoveContainer" containerID="ec136525ccdf446c931da7012440a8ee7e543218ad1dc55e23b9e0d53c5b917f"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.266647 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k84zp"]
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.270189 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k84zp"]
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.273288 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wzr2"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.279592 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"]
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.285020 4941 scope.go:117] "RemoveContainer" containerID="e62f28f6f5a652d6b5ea5bc47cc3e72848abbb3ecbfb94b445ecb985db14d520"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.285558 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8rmn"]
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.313001 4941 scope.go:117] "RemoveContainer" containerID="c3aa2a6d962c8c30eeb614f3bc4e9d4c973a1793a5d53112b2f86150cb756877"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.329288 4941 scope.go:117] "RemoveContainer" containerID="ad83d1c22521280648a7e939824a67a3b7e1493bea82bf8e641acbd71fb2939a"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.343169 4941 scope.go:117] "RemoveContainer" containerID="ee2aca124e834c751406e526fd18da60e815864998a63cef05982c30a386de52"
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.376070 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content\") pod \"e1c166e9-0f02-4929-b623-404b062973fc\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") "
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.376205 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjlfk\" (UniqueName: \"kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk\") pod \"e1c166e9-0f02-4929-b623-404b062973fc\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") "
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.376279 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities\") pod \"e1c166e9-0f02-4929-b623-404b062973fc\" (UID: \"e1c166e9-0f02-4929-b623-404b062973fc\") "
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.377289 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities" (OuterVolumeSpecName: "utilities") pod "e1c166e9-0f02-4929-b623-404b062973fc" (UID: "e1c166e9-0f02-4929-b623-404b062973fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.381767 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk" (OuterVolumeSpecName: "kube-api-access-kjlfk") pod "e1c166e9-0f02-4929-b623-404b062973fc" (UID: "e1c166e9-0f02-4929-b623-404b062973fc"). InnerVolumeSpecName "kube-api-access-kjlfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.477529 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.477581 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjlfk\" (UniqueName: \"kubernetes.io/projected/e1c166e9-0f02-4929-b623-404b062973fc-kube-api-access-kjlfk\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.492720 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1c166e9-0f02-4929-b623-404b062973fc" (UID: "e1c166e9-0f02-4929-b623-404b062973fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:39:05 crc kubenswrapper[4941]: I0227 19:39:05.579452 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c166e9-0f02-4929-b623-404b062973fc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:05 crc kubenswrapper[4941]: E0227 19:39:05.754505 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 19:39:05 crc kubenswrapper[4941]: E0227 19:39:05.754667 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blwb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kllrr_openshift-marketplace(bd71dd28-494b-4f92-8cf2-f79b4709c6d5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:39:05 crc kubenswrapper[4941]: E0227 19:39:05.755976 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.074657 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.182092 4941 generic.go:334] "Generic (PLEG): container finished" podID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerID="cd345431c7d3fc2bf96b0657fb57454fecb6a77ac08cc0e481421d0caca5ea4a" exitCode=0
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.182158 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerDied","Data":"cd345431c7d3fc2bf96b0657fb57454fecb6a77ac08cc0e481421d0caca5ea4a"}
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186371 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert\") pod \"622c4d5e-b31c-4462-a13f-367bc798f499\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186411 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config\") pod \"622c4d5e-b31c-4462-a13f-367bc798f499\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186490 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpjk\" (UniqueName: \"kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk\") pod \"622c4d5e-b31c-4462-a13f-367bc798f499\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186509 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca\") pod \"622c4d5e-b31c-4462-a13f-367bc798f499\" (UID: \"622c4d5e-b31c-4462-a13f-367bc798f499\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186741 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l" event={"ID":"622c4d5e-b31c-4462-a13f-367bc798f499","Type":"ContainerDied","Data":"3b8742aafc712fd547d19e6f074d830ab0d940a75ad3b303c00e28f215ba8153"}
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186792 4941 scope.go:117] "RemoveContainer" containerID="d99ab26e707e561722d3774beed219dc98e346905ebe638855a373e3fae5a176"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.186904 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.187366 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca" (OuterVolumeSpecName: "client-ca") pod "622c4d5e-b31c-4462-a13f-367bc798f499" (UID: "622c4d5e-b31c-4462-a13f-367bc798f499"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.187727 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config" (OuterVolumeSpecName: "config") pod "622c4d5e-b31c-4462-a13f-367bc798f499" (UID: "622c4d5e-b31c-4462-a13f-367bc798f499"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.190924 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wzr2" event={"ID":"e1c166e9-0f02-4929-b623-404b062973fc","Type":"ContainerDied","Data":"8f25ef1a402c00945b7e89bdd110c365e1b7b38f56bbf66ce011fe072919a856"}
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.190955 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wzr2"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.191595 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk" (OuterVolumeSpecName: "kube-api-access-qfpjk") pod "622c4d5e-b31c-4462-a13f-367bc798f499" (UID: "622c4d5e-b31c-4462-a13f-367bc798f499"). InnerVolumeSpecName "kube-api-access-qfpjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.191796 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "622c4d5e-b31c-4462-a13f-367bc798f499" (UID: "622c4d5e-b31c-4462-a13f-367bc798f499"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.239514 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.246865 4941 scope.go:117] "RemoveContainer" containerID="e2d185a7a530c4a1a8675f265861df3cde5fb945066dcab33bbc201edeee30e3"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.248330 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5wzr2"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.264047 4941 scope.go:117] "RemoveContainer" containerID="bf10322c20a1ed58988c7c889408ce55b0e0564a09ed3cbc311d4c811f36b397"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.284085 4941 scope.go:117] "RemoveContainer" containerID="676d6abe1124c45d88e41dc4d53bd580fc2222ddbdf6985ec824a407528ea977"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.288034 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622c4d5e-b31c-4462-a13f-367bc798f499-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.288056 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.288068 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpjk\" (UniqueName: \"kubernetes.io/projected/622c4d5e-b31c-4462-a13f-367bc798f499-kube-api-access-qfpjk\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.288079 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622c4d5e-b31c-4462-a13f-367bc798f499-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.295598 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.388673 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca\") pod \"bed537c3-9533-4953-a2bf-0df889e8ed81\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.388729 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert\") pod \"bed537c3-9533-4953-a2bf-0df889e8ed81\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.388829 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config\") pod \"bed537c3-9533-4953-a2bf-0df889e8ed81\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.388864 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles\") pod \"bed537c3-9533-4953-a2bf-0df889e8ed81\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.388894 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbl4z\" (UniqueName: \"kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z\") pod \"bed537c3-9533-4953-a2bf-0df889e8ed81\" (UID: \"bed537c3-9533-4953-a2bf-0df889e8ed81\") "
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.389344 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca" (OuterVolumeSpecName: "client-ca") pod "bed537c3-9533-4953-a2bf-0df889e8ed81" (UID: "bed537c3-9533-4953-a2bf-0df889e8ed81"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.389529 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config" (OuterVolumeSpecName: "config") pod "bed537c3-9533-4953-a2bf-0df889e8ed81" (UID: "bed537c3-9533-4953-a2bf-0df889e8ed81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.389593 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bed537c3-9533-4953-a2bf-0df889e8ed81" (UID: "bed537c3-9533-4953-a2bf-0df889e8ed81"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.392298 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z" (OuterVolumeSpecName: "kube-api-access-tbl4z") pod "bed537c3-9533-4953-a2bf-0df889e8ed81" (UID: "bed537c3-9533-4953-a2bf-0df889e8ed81"). InnerVolumeSpecName "kube-api-access-tbl4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.392333 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bed537c3-9533-4953-a2bf-0df889e8ed81" (UID: "bed537c3-9533-4953-a2bf-0df889e8ed81"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.473100 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" path="/var/lib/kubelet/pods/2fbfd7a3-e234-4afe-aaa4-f89d486c7164/volumes"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.473695 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c166e9-0f02-4929-b623-404b062973fc" path="/var/lib/kubelet/pods/e1c166e9-0f02-4929-b623-404b062973fc/volumes"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.474230 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" path="/var/lib/kubelet/pods/f8ac2de3-f395-4fe8-90f4-a7fa58792f5a/volumes"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.490036 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.490053 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.490062 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbl4z\" (UniqueName: \"kubernetes.io/projected/bed537c3-9533-4953-a2bf-0df889e8ed81-kube-api-access-tbl4z\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.490160 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bed537c3-9533-4953-a2bf-0df889e8ed81-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.490196 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed537c3-9533-4953-a2bf-0df889e8ed81-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.502178 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.506883 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf995b9fc-bjw7l"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.786645 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"]
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.786952 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.786972 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.786986 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.786993 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787006 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787014 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787029 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787036 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787047 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787056 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787066 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787074 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787086 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed537c3-9533-4953-a2bf-0df889e8ed81" containerName="controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787093 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed537c3-9533-4953-a2bf-0df889e8ed81" containerName="controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787102 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787111 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="extract-utilities"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787121 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787130 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787142 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622c4d5e-b31c-4462-a13f-367bc798f499" containerName="route-controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787151 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="622c4d5e-b31c-4462-a13f-367bc798f499" containerName="route-controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: E0227 19:39:06.787162 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787172 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="extract-content"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787293 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="622c4d5e-b31c-4462-a13f-367bc798f499" containerName="route-controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787306 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed537c3-9533-4953-a2bf-0df889e8ed81" containerName="controller-manager"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787322 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbfd7a3-e234-4afe-aaa4-f89d486c7164" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787333 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c166e9-0f02-4929-b623-404b062973fc" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787341 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ac2de3-f395-4fe8-90f4-a7fa58792f5a" containerName="registry-server"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.787824 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.789079 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.789936 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.793390 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.794056 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.794381 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.794539 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.794733 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.795386 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.892347 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896098 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896145 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896180 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896198 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896230 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzvt\" (UniqueName: \"kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896249 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896268 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896296 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.896317 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.911360 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"]
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997428 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzvt\" (UniqueName: \"kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997483 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997510 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997544 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997568 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997603 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997626 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997654 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.997675 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"
Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.998666 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") "
pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:06 crc kubenswrapper[4941]: I0227 19:39:06.999077 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.000692 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.001068 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.002753 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.003854 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert\") pod 
\"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.006728 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.014278 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925\") pod \"route-controller-manager-c5c9887b9-nx7hw\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.016061 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzvt\" (UniqueName: \"kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt\") pod \"controller-manager-5b97f6d677-6pqgz\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.197536 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.202140 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" event={"ID":"bed537c3-9533-4953-a2bf-0df889e8ed81","Type":"ContainerDied","Data":"4696013348a1738d5439fcb88aa9fe7950acf41800ba2dd83edb75bb8034bece"} Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.202199 4941 scope.go:117] "RemoveContainer" containerID="1c94b225921c519cf2deadb15fa03b9481d5061ac53e6c63317d9b3124b9d92a" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.202209 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.211803 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.227134 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.233419 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7dfb7b7695-2cfx6"] Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.631118 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"] Feb 27 19:39:07 crc kubenswrapper[4941]: I0227 19:39:07.684990 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"] Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.217276 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" event={"ID":"66f4db0b-899e-4829-88d7-19287d06a12d","Type":"ContainerStarted","Data":"cbdc62a8176d4aa0adee52c2a9745ddcb97985e478a1c192e59ca1a2fac51b09"} Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.217322 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" event={"ID":"66f4db0b-899e-4829-88d7-19287d06a12d","Type":"ContainerStarted","Data":"d8e6256c86fed55351a4f8f675559d8bffee6bc29b2b58d9457ccb91daed6002"} Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.218617 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" event={"ID":"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750","Type":"ContainerStarted","Data":"5008899ca27c50b5493e4a440185c2fdc1e10fd9161fb6d0433ad46e93b324fd"} Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.221215 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerStarted","Data":"75a32d4012f15487398d3d803b3055e4fe2ce6b4d4576a8e92159c3f6b47ea81"} Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.252324 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n48bk" podStartSLOduration=3.609656092 podStartE2EDuration="1m3.252301201s" podCreationTimestamp="2026-02-27 19:38:05 +0000 UTC" firstStartedPulling="2026-02-27 19:38:07.301720879 +0000 UTC m=+205.562861299" lastFinishedPulling="2026-02-27 19:39:06.944365988 +0000 UTC m=+265.205506408" observedRunningTime="2026-02-27 19:39:08.246552549 +0000 UTC m=+266.507692979" watchObservedRunningTime="2026-02-27 19:39:08.252301201 +0000 UTC m=+266.513441621" Feb 27 19:39:08 crc kubenswrapper[4941]: E0227 19:39:08.468670 4941 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.475341 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622c4d5e-b31c-4462-a13f-367bc798f499" path="/var/lib/kubelet/pods/622c4d5e-b31c-4462-a13f-367bc798f499/volumes" Feb 27 19:39:08 crc kubenswrapper[4941]: I0227 19:39:08.476173 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed537c3-9533-4953-a2bf-0df889e8ed81" path="/var/lib/kubelet/pods/bed537c3-9533-4953-a2bf-0df889e8ed81/volumes" Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.233983 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" event={"ID":"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750","Type":"ContainerStarted","Data":"46483921fbc213866b751c2ca537c1214c837173b29fa8f677dae1c9c6accaf0"} Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.234493 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.234541 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.240813 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.241388 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:09 crc 
kubenswrapper[4941]: I0227 19:39:09.262653 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" podStartSLOduration=5.262621615 podStartE2EDuration="5.262621615s" podCreationTimestamp="2026-02-27 19:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:39:09.259913894 +0000 UTC m=+267.521054314" watchObservedRunningTime="2026-02-27 19:39:09.262621615 +0000 UTC m=+267.523762025" Feb 27 19:39:09 crc kubenswrapper[4941]: I0227 19:39:09.285918 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" podStartSLOduration=5.285878351 podStartE2EDuration="5.285878351s" podCreationTimestamp="2026-02-27 19:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:39:09.278909012 +0000 UTC m=+267.540049432" watchObservedRunningTime="2026-02-27 19:39:09.285878351 +0000 UTC m=+267.547018771" Feb 27 19:39:15 crc kubenswrapper[4941]: I0227 19:39:15.469974 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:39:15 crc kubenswrapper[4941]: E0227 19:39:15.470014 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:39:15 crc kubenswrapper[4941]: E0227 19:39:15.470736 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:39:15 crc kubenswrapper[4941]: I0227 19:39:15.470793 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:39:15 crc kubenswrapper[4941]: I0227 19:39:15.520159 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:39:15 crc kubenswrapper[4941]: I0227 19:39:15.781087 4941 ???:1] "http: TLS handshake error from 192.168.126.11:34116: no serving certificate available for the kubelet" Feb 27 19:39:16 crc kubenswrapper[4941]: I0227 19:39:16.321956 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:39:17 crc kubenswrapper[4941]: I0227 19:39:17.315252 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b9bp4"] Feb 27 19:39:20 crc kubenswrapper[4941]: E0227 19:39:20.470637 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:39:20 crc kubenswrapper[4941]: E0227 19:39:20.470883 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:39:24 crc kubenswrapper[4941]: I0227 19:39:24.911867 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"] Feb 27 19:39:24 crc kubenswrapper[4941]: I0227 19:39:24.912874 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" podUID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" containerName="controller-manager" containerID="cri-o://46483921fbc213866b751c2ca537c1214c837173b29fa8f677dae1c9c6accaf0" gracePeriod=30 Feb 27 19:39:24 crc kubenswrapper[4941]: I0227 19:39:24.991969 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"] Feb 27 19:39:24 crc kubenswrapper[4941]: I0227 19:39:24.992217 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" podUID="66f4db0b-899e-4829-88d7-19287d06a12d" containerName="route-controller-manager" containerID="cri-o://cbdc62a8176d4aa0adee52c2a9745ddcb97985e478a1c192e59ca1a2fac51b09" gracePeriod=30 Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.332280 4941 generic.go:334] "Generic (PLEG): container finished" podID="66f4db0b-899e-4829-88d7-19287d06a12d" containerID="cbdc62a8176d4aa0adee52c2a9745ddcb97985e478a1c192e59ca1a2fac51b09" exitCode=0 Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.332524 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" event={"ID":"66f4db0b-899e-4829-88d7-19287d06a12d","Type":"ContainerDied","Data":"cbdc62a8176d4aa0adee52c2a9745ddcb97985e478a1c192e59ca1a2fac51b09"} Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.334084 4941 generic.go:334] "Generic (PLEG): container finished" podID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" containerID="46483921fbc213866b751c2ca537c1214c837173b29fa8f677dae1c9c6accaf0" exitCode=0 Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.334120 
4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" event={"ID":"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750","Type":"ContainerDied","Data":"46483921fbc213866b751c2ca537c1214c837173b29fa8f677dae1c9c6accaf0"} Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.515622 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.556082 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.573902 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config\") pod \"66f4db0b-899e-4829-88d7-19287d06a12d\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.574067 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert\") pod \"66f4db0b-899e-4829-88d7-19287d06a12d\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.574127 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925\") pod \"66f4db0b-899e-4829-88d7-19287d06a12d\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.574154 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca\") pod \"66f4db0b-899e-4829-88d7-19287d06a12d\" (UID: \"66f4db0b-899e-4829-88d7-19287d06a12d\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.575269 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca" (OuterVolumeSpecName: "client-ca") pod "66f4db0b-899e-4829-88d7-19287d06a12d" (UID: "66f4db0b-899e-4829-88d7-19287d06a12d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.575367 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config" (OuterVolumeSpecName: "config") pod "66f4db0b-899e-4829-88d7-19287d06a12d" (UID: "66f4db0b-899e-4829-88d7-19287d06a12d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.582616 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "66f4db0b-899e-4829-88d7-19287d06a12d" (UID: "66f4db0b-899e-4829-88d7-19287d06a12d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.583021 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925" (OuterVolumeSpecName: "kube-api-access-pm925") pod "66f4db0b-899e-4829-88d7-19287d06a12d" (UID: "66f4db0b-899e-4829-88d7-19287d06a12d"). InnerVolumeSpecName "kube-api-access-pm925". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.674769 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca\") pod \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.674862 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert\") pod \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.674889 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config\") pod \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.674914 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles\") pod \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.674945 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rzvt\" (UniqueName: \"kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt\") pod \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\" (UID: \"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750\") " Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.675165 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/66f4db0b-899e-4829-88d7-19287d06a12d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.675177 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/66f4db0b-899e-4829-88d7-19287d06a12d-kube-api-access-pm925\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.675185 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.675193 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f4db0b-899e-4829-88d7-19287d06a12d-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.675924 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca" (OuterVolumeSpecName: "client-ca") pod "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" (UID: "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.676120 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" (UID: "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.677054 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config" (OuterVolumeSpecName: "config") pod "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" (UID: "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.678083 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt" (OuterVolumeSpecName: "kube-api-access-2rzvt") pod "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" (UID: "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750"). InnerVolumeSpecName "kube-api-access-2rzvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.679292 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" (UID: "842b3c1b-c3b7-435e-9fb5-38f2bf6ea750"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.776697 4941 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.776745 4941 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.776758 4941 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.776774 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rzvt\" (UniqueName: \"kubernetes.io/projected/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-kube-api-access-2rzvt\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:25 crc kubenswrapper[4941]: I0227 19:39:25.776785 4941 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.341453 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" event={"ID":"66f4db0b-899e-4829-88d7-19287d06a12d","Type":"ContainerDied","Data":"d8e6256c86fed55351a4f8f675559d8bffee6bc29b2b58d9457ccb91daed6002"} Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.341541 4941 scope.go:117] "RemoveContainer" containerID="cbdc62a8176d4aa0adee52c2a9745ddcb97985e478a1c192e59ca1a2fac51b09" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.342765 4941 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.343829 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" event={"ID":"842b3c1b-c3b7-435e-9fb5-38f2bf6ea750","Type":"ContainerDied","Data":"5008899ca27c50b5493e4a440185c2fdc1e10fd9161fb6d0433ad46e93b324fd"} Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.343904 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b97f6d677-6pqgz" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.368716 4941 scope.go:117] "RemoveContainer" containerID="46483921fbc213866b751c2ca537c1214c837173b29fa8f677dae1c9c6accaf0" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.376107 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.385194 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b97f6d677-6pqgz"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.404843 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.408882 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5c9887b9-nx7hw"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.473819 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f4db0b-899e-4829-88d7-19287d06a12d" path="/var/lib/kubelet/pods/66f4db0b-899e-4829-88d7-19287d06a12d/volumes" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.474769 4941 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" path="/var/lib/kubelet/pods/842b3c1b-c3b7-435e-9fb5-38f2bf6ea750/volumes" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.799799 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct"] Feb 27 19:39:26 crc kubenswrapper[4941]: E0227 19:39:26.800193 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f4db0b-899e-4829-88d7-19287d06a12d" containerName="route-controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.800220 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f4db0b-899e-4829-88d7-19287d06a12d" containerName="route-controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: E0227 19:39:26.800260 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" containerName="controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.800277 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" containerName="controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.800575 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f4db0b-899e-4829-88d7-19287d06a12d" containerName="route-controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.800604 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="842b3c1b-c3b7-435e-9fb5-38f2bf6ea750" containerName="controller-manager" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.801314 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.805996 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.806230 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.806442 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.806637 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.806899 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.807198 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.808328 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99bf458b-6dm2m"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.809184 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.812735 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.813308 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.813591 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.814144 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.815290 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.819565 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.824521 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.825902 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.828200 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99bf458b-6dm2m"] Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.893855 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f3c0a435-9d9a-488b-b8d4-8a70288941b1-serving-cert\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.893936 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-client-ca\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.893974 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757rp\" (UniqueName: \"kubernetes.io/projected/f3c0a435-9d9a-488b-b8d4-8a70288941b1-kube-api-access-757rp\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894048 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-config\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894078 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2376b081-ca19-44fd-9fb2-b32b00629787-serving-cert\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " 
pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894103 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-config\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894124 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-client-ca\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894151 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lpm\" (UniqueName: \"kubernetes.io/projected/2376b081-ca19-44fd-9fb2-b32b00629787-kube-api-access-s6lpm\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.894345 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-proxy-ca-bundles\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996092 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-config\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996168 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2376b081-ca19-44fd-9fb2-b32b00629787-serving-cert\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996205 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-config\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996236 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-client-ca\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996262 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lpm\" (UniqueName: \"kubernetes.io/projected/2376b081-ca19-44fd-9fb2-b32b00629787-kube-api-access-s6lpm\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996294 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-proxy-ca-bundles\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996331 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c0a435-9d9a-488b-b8d4-8a70288941b1-serving-cert\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996367 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-client-ca\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.996399 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757rp\" (UniqueName: \"kubernetes.io/projected/f3c0a435-9d9a-488b-b8d4-8a70288941b1-kube-api-access-757rp\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.997507 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-config\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " 
pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.997950 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-config\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.998297 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c0a435-9d9a-488b-b8d4-8a70288941b1-client-ca\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.998683 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-proxy-ca-bundles\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:26 crc kubenswrapper[4941]: I0227 19:39:26.999249 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2376b081-ca19-44fd-9fb2-b32b00629787-client-ca\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.001311 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c0a435-9d9a-488b-b8d4-8a70288941b1-serving-cert\") pod 
\"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.002347 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2376b081-ca19-44fd-9fb2-b32b00629787-serving-cert\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.013146 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757rp\" (UniqueName: \"kubernetes.io/projected/f3c0a435-9d9a-488b-b8d4-8a70288941b1-kube-api-access-757rp\") pod \"route-controller-manager-5975749d64-hdnct\" (UID: \"f3c0a435-9d9a-488b-b8d4-8a70288941b1\") " pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.025747 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lpm\" (UniqueName: \"kubernetes.io/projected/2376b081-ca19-44fd-9fb2-b32b00629787-kube-api-access-s6lpm\") pod \"controller-manager-99bf458b-6dm2m\" (UID: \"2376b081-ca19-44fd-9fb2-b32b00629787\") " pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.126617 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.137829 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.424921 4941 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.425689 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.426974 4941 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.427656 4941 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.427747 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf" gracePeriod=15 Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.427796 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87" gracePeriod=15 Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.427835 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93" gracePeriod=15 Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 
19:39:27.427737 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9" gracePeriod=15 Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.427940 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242" gracePeriod=15 Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.427872 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428162 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428182 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428191 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428215 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428223 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc 
kubenswrapper[4941]: E0227 19:39:27.428234 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428241 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428277 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428288 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428302 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428310 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428330 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428339 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428350 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428357 4941 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428366 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428374 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428589 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428600 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428616 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428625 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428636 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428646 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428654 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428663 
4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.428781 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.428792 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.429145 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.468359 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.529511 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.529594 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 
19:39:27.529792 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.530068 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.530151 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.530187 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.530250 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 
19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.530310 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631202 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631250 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631276 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631377 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc 
kubenswrapper[4941]: I0227 19:39:27.631384 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631338 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631420 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631401 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631466 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631511 4941 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631515 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631533 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631555 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631581 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631770 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: I0227 19:39:27.631870 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.849953 4941 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f" Netns:"/var/run/netns/14808b1f-a582-402d-b60e-130ee4c0d6e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks 
status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.850042 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f" Netns:"/var/run/netns/14808b1f-a582-402d-b60e-130ee4c0d6e8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.850062 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f" Netns:"/var/run/netns/14808b1f-a582-402d-b60e-130ee4c0d6e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.850119 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f\\\" Netns:\\\"/var/run/netns/14808b1f-a582-402d-b60e-130ee4c0d6e8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=dc96470c84474a914e319235d399c4a60320a5a63ef3dab26900d8b3204cfd3f;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.905604 4941 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a" Netns:"/var/run/netns/f86fe51c-dc28-4441-8d82-bfb765bdacf5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.905672 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a" Netns:"/var/run/netns/f86fe51c-dc28-4441-8d82-bfb765bdacf5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > 
pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.905694 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 19:39:27 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a" Netns:"/var/run/netns/f86fe51c-dc28-4441-8d82-bfb765bdacf5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:27 crc kubenswrapper[4941]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:27 crc kubenswrapper[4941]: > pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:27 crc kubenswrapper[4941]: E0227 19:39:27.905749 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a\\\" Netns:\\\"/var/run/netns/f86fe51c-dc28-4441-8d82-bfb765bdacf5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=2471beb2ffdf6d9c6399b5e02e92a55f423356950595892b7ff7f81241a3532a;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: 
[openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" podUID="2376b081-ca19-44fd-9fb2-b32b00629787" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.359504 4941 generic.go:334] "Generic (PLEG): container finished" podID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" containerID="9c11321b0c4e2d766b4d506b4c70643a7f52b439bedbd5100dc50c1a6c89f4fc" exitCode=0 Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.359580 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e21ea716-ea1a-46b4-93be-5be9ca44cccb","Type":"ContainerDied","Data":"9c11321b0c4e2d766b4d506b4c70643a7f52b439bedbd5100dc50c1a6c89f4fc"} Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.362173 4941 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.363298 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364524 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9" exitCode=0 Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364554 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87" exitCode=0 Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364563 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93" exitCode=0 Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364569 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242" exitCode=2 Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364597 4941 scope.go:117] "RemoveContainer" containerID="b545c31352c3841c154ddc3c7dd19f7b9ad6aace6664baabd2e959f1bd35569e" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364632 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.364790 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.365105 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:28 crc kubenswrapper[4941]: I0227 19:39:28.365279 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:28 crc kubenswrapper[4941]: E0227 19:39:28.494097 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.035707 4941 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0" Netns:"/var/run/netns/3828e2e4-6875-4cdc-99eb-c1b32bbcd901" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.035779 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0): error adding pod 
openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0" Netns:"/var/run/netns/3828e2e4-6875-4cdc-99eb-c1b32bbcd901" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:29 crc 
kubenswrapper[4941]: E0227 19:39:29.035802 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0" Netns:"/var/run/netns/3828e2e4-6875-4cdc-99eb-c1b32bbcd901" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.035868 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0\\\" Netns:\\\"/var/run/netns/3828e2e4-6875-4cdc-99eb-c1b32bbcd901\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=4cd1457cac189b6f979e6f0ba097800abb8e2cd7fd3ea561644d57cbf670acf0;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1" Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.129103 4941 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666" Netns:"/var/run/netns/32d32149-7ece-4d0d-b499-7154da78c2c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.129178 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666" Netns:"/var/run/netns/32d32149-7ece-4d0d-b499-7154da78c2c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > 
pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.129197 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 19:39:29 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666" Netns:"/var/run/netns/32d32149-7ece-4d0d-b499-7154da78c2c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:29 crc kubenswrapper[4941]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:29 crc kubenswrapper[4941]: > pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:29 crc kubenswrapper[4941]: E0227 19:39:29.129257 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666\\\" Netns:\\\"/var/run/netns/32d32149-7ece-4d0d-b499-7154da78c2c5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=a45b536baf8bedd93203ca464f311c50410e64abaa0dbab8aed88d8496cce666;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: 
[openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" podUID="2376b081-ca19-44fd-9fb2-b32b00629787" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.372904 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.781204 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.787226 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.788584 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.851550 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.851609 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.851657 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.852149 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.852203 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817" gracePeriod=600 Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864034 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864092 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864144 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864152 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864174 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access\") pod \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864197 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864214 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864218 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir\") pod \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864244 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock\") pod \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\" (UID: \"e21ea716-ea1a-46b4-93be-5be9ca44cccb\") " Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864440 4941 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864452 4941 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864459 4941 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864510 4941 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock" (OuterVolumeSpecName: "var-lock") pod "e21ea716-ea1a-46b4-93be-5be9ca44cccb" (UID: "e21ea716-ea1a-46b4-93be-5be9ca44cccb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.864541 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e21ea716-ea1a-46b4-93be-5be9ca44cccb" (UID: "e21ea716-ea1a-46b4-93be-5be9ca44cccb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.870175 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e21ea716-ea1a-46b4-93be-5be9ca44cccb" (UID: "e21ea716-ea1a-46b4-93be-5be9ca44cccb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.965661 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.965691 4941 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:29 crc kubenswrapper[4941]: I0227 19:39:29.965699 4941 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e21ea716-ea1a-46b4-93be-5be9ca44cccb-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.379831 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817" exitCode=0 Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.379917 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817"} Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.380238 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921"} Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.381635 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.381669 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e21ea716-ea1a-46b4-93be-5be9ca44cccb","Type":"ContainerDied","Data":"5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee"} Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.381704 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c913dfca8f8aaad5a2c8de5bfd0bfaeec9204facb1f19f95677526a9655e7ee" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.384430 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.385383 4941 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf" exitCode=0 Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.385440 4941 scope.go:117] "RemoveContainer" containerID="425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.385505 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.399401 4941 scope.go:117] "RemoveContainer" containerID="d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.411724 4941 scope.go:117] "RemoveContainer" containerID="453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.426418 4941 scope.go:117] "RemoveContainer" containerID="4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.440191 4941 scope.go:117] "RemoveContainer" containerID="96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.454598 4941 scope.go:117] "RemoveContainer" containerID="ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.473667 4941 scope.go:117] "RemoveContainer" containerID="425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.474268 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\": container with ID starting with 425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9 not found: ID does not exist" containerID="425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.474303 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9"} err="failed to get container status \"425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\": rpc error: code = NotFound desc = could not find 
container \"425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9\": container with ID starting with 425a1207295436cda3f72a19ee38ba87c3763c44124f5edd6f97feac91cfccd9 not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.474328 4941 scope.go:117] "RemoveContainer" containerID="d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.474794 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\": container with ID starting with d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87 not found: ID does not exist" containerID="d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.474873 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87"} err="failed to get container status \"d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\": rpc error: code = NotFound desc = could not find container \"d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87\": container with ID starting with d7e6debba1ede5520b53170f0f8a188e15f113a9c641703566648ec9d1e8dd87 not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.474919 4941 scope.go:117] "RemoveContainer" containerID="453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.475435 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\": container with ID starting with 453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93 not found: ID does 
not exist" containerID="453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.475508 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93"} err="failed to get container status \"453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\": rpc error: code = NotFound desc = could not find container \"453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93\": container with ID starting with 453d835d3d8f2c071a21772eac5321254367a5bb2a0c454f0d6f74650b773a93 not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.475543 4941 scope.go:117] "RemoveContainer" containerID="4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.475882 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\": container with ID starting with 4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242 not found: ID does not exist" containerID="4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.475906 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242"} err="failed to get container status \"4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\": rpc error: code = NotFound desc = could not find container \"4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242\": container with ID starting with 4b45272d36755997cbc22ebd0fab48f7fdb7efd2cdefcea5eea86355635a3242 not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.475927 4941 
scope.go:117] "RemoveContainer" containerID="96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.476265 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\": container with ID starting with 96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf not found: ID does not exist" containerID="96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.476300 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf"} err="failed to get container status \"96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\": rpc error: code = NotFound desc = could not find container \"96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf\": container with ID starting with 96ccb60e3140a42ffc8b6f1f4900086b0814f3c458ed784e252d0ac92ebcbbbf not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.476339 4941 scope.go:117] "RemoveContainer" containerID="ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114" Feb 27 19:39:30 crc kubenswrapper[4941]: E0227 19:39:30.476709 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\": container with ID starting with ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114 not found: ID does not exist" containerID="ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.476760 4941 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114"} err="failed to get container status \"ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\": rpc error: code = NotFound desc = could not find container \"ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114\": container with ID starting with ce6acf37cd723c0d321a899f7ef726575daead75252612c9827c0a2e4ca09114 not found: ID does not exist" Feb 27 19:39:30 crc kubenswrapper[4941]: I0227 19:39:30.477810 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 19:39:31 crc kubenswrapper[4941]: E0227 19:39:31.468180 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:39:32 crc kubenswrapper[4941]: E0227 19:39:32.461957 4941 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.75:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.462366 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.471338 4941 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: E0227 19:39:32.471561 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/events/auto-csr-approver-29537018-k4vjk.189831afb2d98f35\": dial tcp 38.102.83.75:6443: connect: connection refused" event="&Event{ObjectMeta:{auto-csr-approver-29537018-k4vjk.189831afb2d98f35 openshift-infra 29760 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-infra,Name:auto-csr-approver-29537018-k4vjk,UID:0d98f658-1f8e-41f5-bc4e-2f442243e453,APIVersion:v1,ResourceVersion:27542,FieldPath:spec.containers{oc},},Reason:BackOff,Message:Back-off pulling image \"registry.redhat.io/openshift4/ose-cli:latest\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:38:04 +0000 UTC,LastTimestamp:2026-02-27 19:39:27.468324514 +0000 UTC m=+285.729464934,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.471777 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc 
kubenswrapper[4941]: I0227 19:39:32.472125 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.472490 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.473554 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.473731 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.474054 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc 
kubenswrapper[4941]: I0227 19:39:32.476914 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.477559 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.478093 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.478859 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:32 crc kubenswrapper[4941]: I0227 19:39:32.479027 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc 
kubenswrapper[4941]: I0227 19:39:33.405648 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a14050cc3ad575814029d2b905bfcc87d44b751e722de0f4ca805e7588a3ffc9"} Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.406352 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"28f77075cd3a419531d2c7fd19698cbe0653aeea998d560c3dceaa9808b10160"} Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.407129 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc kubenswrapper[4941]: E0227 19:39:33.407572 4941 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.75:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.407622 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.408015 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" 
pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.408264 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc kubenswrapper[4941]: I0227 19:39:33.408610 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:33 crc kubenswrapper[4941]: E0227 19:39:33.468490 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.503396 4941 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/events/auto-csr-approver-29537018-k4vjk.189831afb2d98f35\": dial tcp 38.102.83.75:6443: connect: connection refused" event="&Event{ObjectMeta:{auto-csr-approver-29537018-k4vjk.189831afb2d98f35 openshift-infra 29760 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-infra,Name:auto-csr-approver-29537018-k4vjk,UID:0d98f658-1f8e-41f5-bc4e-2f442243e453,APIVersion:v1,ResourceVersion:27542,FieldPath:spec.containers{oc},},Reason:BackOff,Message:Back-off pulling image \"registry.redhat.io/openshift4/ose-cli:latest\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 19:38:04 +0000 UTC,LastTimestamp:2026-02-27 19:39:27.468324514 +0000 UTC m=+285.729464934,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.652042 4941 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.652720 4941 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.653202 4941 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.653599 4941 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.653958 4941 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:36 crc kubenswrapper[4941]: I0227 19:39:36.653990 4941 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.654205 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="200ms" Feb 27 19:39:36 crc kubenswrapper[4941]: E0227 19:39:36.855653 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="400ms" Feb 27 19:39:37 crc kubenswrapper[4941]: E0227 19:39:37.256999 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="800ms" Feb 27 19:39:38 crc kubenswrapper[4941]: E0227 19:39:38.058314 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="1.6s" Feb 27 19:39:38 crc kubenswrapper[4941]: E0227 19:39:38.548372 4941 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.75:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" volumeName="registry-storage" Feb 27 19:39:39 crc kubenswrapper[4941]: E0227 19:39:39.659799 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="3.2s" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.469835 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 19:39:40 crc kubenswrapper[4941]: E0227 19:39:40.470341 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:39:40 crc kubenswrapper[4941]: E0227 19:39:40.470642 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.471528 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.471592 4941 generic.go:334] "Generic (PLEG): container 
finished" podID="f614b9022728cf315e60c057852e563e" containerID="2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc" exitCode=1 Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.483657 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc"} Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.484645 4941 scope.go:117] "RemoveContainer" containerID="2c1f5d14cc95b72bcd0f6cb9008c103d77cf655963f9c3b1995737d4c70c87cc" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.485011 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.485524 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.486299 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.487181 4941 status_manager.go:851] "Failed to 
get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.487979 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:40 crc kubenswrapper[4941]: I0227 19:39:40.488559 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.485281 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.486302 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.486388 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3b8d89b0bff94efa38ef3c47b30b4f38396c59eb83c5ec9bb5a74a6ac5ea646"} Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.488334 4941 
status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.488958 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.489624 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.490588 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc kubenswrapper[4941]: I0227 19:39:41.490972 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:41 crc 
kubenswrapper[4941]: I0227 19:39:41.491337 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.349416 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" containerName="oauth-openshift" containerID="cri-o://7ce3bae6f073409e2d8043eccf85bde055d6cd9d1856902d73db8c9d796a3b5f" gracePeriod=15 Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.466100 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.466110 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.470416 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.470730 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.470957 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.471158 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.471257 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: E0227 19:39:42.473560 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.474759 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.476248 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.476978 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.477656 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.477956 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.478214 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.478541 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.479176 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" 
pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.496819 4941 generic.go:334] "Generic (PLEG): container finished" podID="815c7d43-e40d-4519-80ce-13df0e8d63ff" containerID="7ce3bae6f073409e2d8043eccf85bde055d6cd9d1856902d73db8c9d796a3b5f" exitCode=0
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.496906 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" event={"ID":"815c7d43-e40d-4519-80ce-13df0e8d63ff","Type":"ContainerDied","Data":"7ce3bae6f073409e2d8043eccf85bde055d6cd9d1856902d73db8c9d796a3b5f"}
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.535707 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.535744 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799"
Feb 27 19:39:42 crc kubenswrapper[4941]: E0227 19:39:42.536260 4941 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.536838 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.795904 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.797303 4941 status_manager.go:851] "Failed to get status for pod" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b9bp4\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.798016 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.798493 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.798837 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.799228 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.800399 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.800683 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852360 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnbc8\" (UniqueName: \"kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852447 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852521 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852569 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852603 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852634 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852668 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852685 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852729 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852758 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852810 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852836 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852859 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.852894 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs\") pod \"815c7d43-e40d-4519-80ce-13df0e8d63ff\" (UID: \"815c7d43-e40d-4519-80ce-13df0e8d63ff\") "
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.853459 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.853962 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.854071 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.854684 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.856714 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.859713 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.860287 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8" (OuterVolumeSpecName: "kube-api-access-nnbc8") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "kube-api-access-nnbc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.860353 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.860432 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.860644 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: E0227 19:39:42.860788 4941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.75:6443: connect: connection refused" interval="6.4s"
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.861264 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.862332 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.862517 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.862857 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "815c7d43-e40d-4519-80ce-13df0e8d63ff" (UID: "815c7d43-e40d-4519-80ce-13df0e8d63ff"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954647 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954731 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnbc8\" (UniqueName: \"kubernetes.io/projected/815c7d43-e40d-4519-80ce-13df0e8d63ff-kube-api-access-nnbc8\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954745 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954755 4941 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954766 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954777 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954786 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954795 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954805 4941 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/815c7d43-e40d-4519-80ce-13df0e8d63ff-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954817 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954827 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954837 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954845 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:42 crc kubenswrapper[4941]: I0227 19:39:42.954857 4941 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/815c7d43-e40d-4519-80ce-13df0e8d63ff-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 27 19:39:43 crc kubenswrapper[4941]: E0227 19:39:43.169327 4941 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 27 19:39:43 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743" Netns:"/var/run/netns/2bc6d015-e34d-4348-9f17-d4c1ec6be5c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused
Feb 27 19:39:43 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 27 19:39:43 crc kubenswrapper[4941]: >
Feb 27 19:39:43 crc kubenswrapper[4941]: E0227 19:39:43.170127 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 27 19:39:43 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743" Netns:"/var/run/netns/2bc6d015-e34d-4348-9f17-d4c1ec6be5c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused
Feb 27 19:39:43 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 27 19:39:43 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct"
Feb 27 19:39:43 crc kubenswrapper[4941]: E0227 19:39:43.170235 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 27 19:39:43 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743" Netns:"/var/run/netns/2bc6d015-e34d-4348-9f17-d4c1ec6be5c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused
Feb 27 19:39:43 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 27 19:39:43 crc kubenswrapper[4941]: > pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct"
Feb 27 19:39:43 crc kubenswrapper[4941]: E0227 19:39:43.170691 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager(f3c0a435-9d9a-488b-b8d4-8a70288941b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5975749d64-hdnct_openshift-route-controller-manager_f3c0a435-9d9a-488b-b8d4-8a70288941b1_0(d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743): error adding pod openshift-route-controller-manager_route-controller-manager-5975749d64-hdnct to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743\\\" Netns:\\\"/var/run/netns/2bc6d015-e34d-4348-9f17-d4c1ec6be5c6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5975749d64-hdnct;K8S_POD_INFRA_CONTAINER_ID=d86b6c1f40ace4a0da15eff585a90b71a892c01239d9cc772f07b51000a94743;K8S_POD_UID=f3c0a435-9d9a-488b-b8d4-8a70288941b1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct/f3c0a435-9d9a-488b-b8d4-8a70288941b1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5975749d64-hdnct in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5975749d64-hdnct?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.467082 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.468047 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.507285 4941 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="866fcddf2768ed0d5fbf06a38c7dfc664260129c716c5b3be911155ea8795b02" exitCode=0
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.507399 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"866fcddf2768ed0d5fbf06a38c7dfc664260129c716c5b3be911155ea8795b02"}
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.507498 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6e14656843ffa7f6006be78c0c30cfca8061259d9d5616274642e90453170b60"}
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.508085 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.508111 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799"
Feb 27 19:39:43 crc kubenswrapper[4941]: E0227 19:39:43.508750 4941 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.509341 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.510810 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.511172 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.511530 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.511804 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" event={"ID":"815c7d43-e40d-4519-80ce-13df0e8d63ff","Type":"ContainerDied","Data":"6891b73d889729455a2d5db4f07521093ccf290dbfae6b5f892f8b4ba3c13839"}
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.511886 4941 scope.go:117] "RemoveContainer" containerID="7ce3bae6f073409e2d8043eccf85bde055d6cd9d1856902d73db8c9d796a3b5f"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.511916 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.512145 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.512545 4941 status_manager.go:851] "Failed to get status for pod" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b9bp4\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.512822 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.513192 4941 status_manager.go:851] "Failed to get status for pod" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b9bp4\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.513664 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.514054 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.514265 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.514439 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.514689 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused"
Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.514946 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk"
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.537923 4941 status_manager.go:851] "Failed to get status for pod" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" pod="openshift-marketplace/redhat-marketplace-kllrr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kllrr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.538903 4941 status_manager.go:851] "Failed to get status for pod" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" pod="openshift-marketplace/community-operators-r9t78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r9t78\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.539497 4941 status_manager.go:851] "Failed to get status for pod" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29537018-k4vjk\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.539892 4941 status_manager.go:851] "Failed to get status for pod" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" pod="openshift-authentication/oauth-openshift-558db77b4-b9bp4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b9bp4\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.540180 4941 status_manager.go:851] "Failed to get status for pod" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" 
pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-hj7qr\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.540503 4941 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:43 crc kubenswrapper[4941]: I0227 19:39:43.540844 4941 status_manager.go:851] "Failed to get status for pod" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.75:6443: connect: connection refused" Feb 27 19:39:44 crc kubenswrapper[4941]: E0227 19:39:44.158602 4941 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:39:44 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7" Netns:"/var/run/netns/45bb814b-4391-4694-86df-97477f6a71a6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:44 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:44 crc kubenswrapper[4941]: > Feb 27 19:39:44 crc kubenswrapper[4941]: E0227 19:39:44.158922 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 19:39:44 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7" Netns:"/var/run/netns/45bb814b-4391-4694-86df-97477f6a71a6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:44 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:44 crc kubenswrapper[4941]: > pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:44 crc kubenswrapper[4941]: E0227 19:39:44.158948 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 19:39:44 crc kubenswrapper[4941]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7" Netns:"/var/run/netns/45bb814b-4391-4694-86df-97477f6a71a6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s": dial tcp 38.102.83.75:6443: connect: connection refused Feb 27 19:39:44 crc kubenswrapper[4941]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:39:44 crc kubenswrapper[4941]: > 
pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:44 crc kubenswrapper[4941]: E0227 19:39:44.159020 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-99bf458b-6dm2m_openshift-controller-manager(2376b081-ca19-44fd-9fb2-b32b00629787)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-99bf458b-6dm2m_openshift-controller-manager_2376b081-ca19-44fd-9fb2-b32b00629787_0(88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7): error adding pod openshift-controller-manager_controller-manager-99bf458b-6dm2m to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7\\\" Netns:\\\"/var/run/netns/45bb814b-4391-4694-86df-97477f6a71a6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-99bf458b-6dm2m;K8S_POD_INFRA_CONTAINER_ID=88873d02c32ee88cd809b80fbb6d99e6cff29e76f02deea8e9890ba9455117c7;K8S_POD_UID=2376b081-ca19-44fd-9fb2-b32b00629787\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-99bf458b-6dm2m] networking: Multus: [openshift-controller-manager/controller-manager-99bf458b-6dm2m/2376b081-ca19-44fd-9fb2-b32b00629787]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-99bf458b-6dm2m in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-99bf458b-6dm2m?timeout=1m0s\\\": dial tcp 38.102.83.75:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" podUID="2376b081-ca19-44fd-9fb2-b32b00629787" Feb 27 19:39:44 crc kubenswrapper[4941]: I0227 19:39:44.543424 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a46c051cbcba960126a49a94b6882b5aa6bddc1522131f6a598a7f653cf95934"} Feb 27 19:39:44 crc kubenswrapper[4941]: I0227 19:39:44.543494 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b60278c3c112aea56d1db96c2f4856c53b52e16bf6c8234112d6e4e4a9502ff"} Feb 27 19:39:44 crc kubenswrapper[4941]: I0227 19:39:44.543509 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21c7b5eaa778ab0b24c6baf484251260d4b22ae74336f1dba62ff9cc0f0d76b4"} Feb 27 19:39:44 crc kubenswrapper[4941]: I0227 19:39:44.543521 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"314857d509393d3e54f2ca0cdf46e80a565dd9f186680a06fb7f5492fca64490"} Feb 27 19:39:44 crc kubenswrapper[4941]: I0227 19:39:44.610945 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:39:45 crc kubenswrapper[4941]: I0227 19:39:45.657454 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"25f3b485b3c57d72d5731aa7bb3ce9b9e1efaba61b18c8262fd46a67879dd845"} Feb 27 19:39:45 crc kubenswrapper[4941]: I0227 19:39:45.660137 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:45 crc kubenswrapper[4941]: I0227 19:39:45.660202 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:45 crc kubenswrapper[4941]: I0227 19:39:45.660226 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:46 crc kubenswrapper[4941]: E0227 19:39:46.093520 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:46 crc kubenswrapper[4941]: E0227 19:39:46.093674 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6xd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9kv24_openshift-marketplace(81b6db0c-c9b7-4f84-8ec4-c690e0c59788): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:46 crc 
kubenswrapper[4941]: E0227 19:39:46.094845 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.160810 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.160988 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.161375 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.537047 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.537107 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:47 crc kubenswrapper[4941]: I0227 19:39:47.547461 4941 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:50 crc kubenswrapper[4941]: I0227 19:39:50.835743 4941 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:51 crc kubenswrapper[4941]: I0227 19:39:51.697691 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:51 crc kubenswrapper[4941]: I0227 19:39:51.698539 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:51 crc kubenswrapper[4941]: I0227 19:39:51.699703 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:39:52 crc kubenswrapper[4941]: I0227 19:39:52.497491 4941 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aa1d79f0-f5ee-4e9a-b74b-56901c72cc28" Feb 27 19:39:52 crc kubenswrapper[4941]: I0227 19:39:52.701359 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:52 crc kubenswrapper[4941]: I0227 19:39:52.701387 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:39:52 crc kubenswrapper[4941]: I0227 19:39:52.708324 4941 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aa1d79f0-f5ee-4e9a-b74b-56901c72cc28" Feb 27 19:39:54 crc kubenswrapper[4941]: E0227 19:39:54.137775 4941 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:54 crc kubenswrapper[4941]: E0227 19:39:54.138048 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xstmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r9t78_openshift-marketplace(5d3d1f1c-429f-4fd3-a28d-089c23afbbba): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:54 crc kubenswrapper[4941]: E0227 19:39:54.139366 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:39:54 crc kubenswrapper[4941]: I0227 19:39:54.466705 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:54 crc kubenswrapper[4941]: I0227 19:39:54.467120 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:54 crc kubenswrapper[4941]: W0227 19:39:54.908777 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c0a435_9d9a_488b_b8d4_8a70288941b1.slice/crio-73d07199948770a871e31d635fb41a0fa9e09056fbc795e93021b7b8981588ef WatchSource:0}: Error finding container 73d07199948770a871e31d635fb41a0fa9e09056fbc795e93021b7b8981588ef: Status 404 returned error can't find the container with id 73d07199948770a871e31d635fb41a0fa9e09056fbc795e93021b7b8981588ef Feb 27 19:39:55 crc kubenswrapper[4941]: I0227 19:39:55.466685 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:55 crc kubenswrapper[4941]: I0227 19:39:55.467256 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" Feb 27 19:39:55 crc kubenswrapper[4941]: I0227 19:39:55.729119 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" event={"ID":"f3c0a435-9d9a-488b-b8d4-8a70288941b1","Type":"ContainerStarted","Data":"7350559d610620bde399848a7d498f42d1eb6edda825de29095f5b87b3c2e216"} Feb 27 19:39:55 crc kubenswrapper[4941]: I0227 19:39:55.729175 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" event={"ID":"f3c0a435-9d9a-488b-b8d4-8a70288941b1","Type":"ContainerStarted","Data":"73d07199948770a871e31d635fb41a0fa9e09056fbc795e93021b7b8981588ef"} Feb 27 19:39:55 crc kubenswrapper[4941]: I0227 19:39:55.729436 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" Feb 27 19:39:56 
crc kubenswrapper[4941]: E0227 19:39:56.469606 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788"
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.729989 4941 patch_prober.go:28] interesting pod/route-controller-manager-5975749d64-hdnct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.730498 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.738620 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" event={"ID":"2376b081-ca19-44fd-9fb2-b32b00629787","Type":"ContainerStarted","Data":"5598bb8bedddd2d08eb5b03ea77d900b9db80d2acf782284774146989e7c2083"}
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.738693 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" event={"ID":"2376b081-ca19-44fd-9fb2-b32b00629787","Type":"ContainerStarted","Data":"18b05966c63b81bfd9a3f0cedf4b305134237aee22c3ed1a09f21ce4777e0c57"}
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.739348 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m"
Feb 27 19:39:56 crc kubenswrapper[4941]: I0227 19:39:56.744710 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m"
Feb 27 19:39:57 crc kubenswrapper[4941]: I0227 19:39:57.161019 4941 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 27 19:39:57 crc kubenswrapper[4941]: I0227 19:39:57.161110 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 27 19:39:57 crc kubenswrapper[4941]: I0227 19:39:57.739014 4941 patch_prober.go:28] interesting pod/route-controller-manager-5975749d64-hdnct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 19:39:57 crc kubenswrapper[4941]: I0227 19:39:57.739087 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 19:39:58 crc kubenswrapper[4941]: E0227 19:39:58.174456 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 19:39:58 crc kubenswrapper[4941]: E0227 19:39:58.174644 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blwb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kllrr_openshift-marketplace(bd71dd28-494b-4f92-8cf2-f79b4709c6d5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:39:58 crc kubenswrapper[4941]: E0227 19:39:58.175866 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5"
Feb 27 19:39:58 crc kubenswrapper[4941]: I0227 19:39:58.739986 4941 patch_prober.go:28] interesting pod/route-controller-manager-5975749d64-hdnct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 19:39:58 crc kubenswrapper[4941]: I0227 19:39:58.740341 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podUID="f3c0a435-9d9a-488b-b8d4-8a70288941b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.004087 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.340988 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.451659 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.828096 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.837142 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.854128 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 27 19:40:01 crc kubenswrapper[4941]: I0227 19:40:01.865233 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.183844 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.241652 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.241660 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.332233 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.400345 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.440913 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.465558 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.467816 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.522986 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.523149 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.835213 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 27 19:40:02 crc kubenswrapper[4941]: I0227 19:40:02.970684 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.030691 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.062031 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.144287 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.282422 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.340328 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.345963 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.474777 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.508823 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.636926 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 27 19:40:03 crc kubenswrapper[4941]: I0227 19:40:03.796691 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.087392 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.155800 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.156802 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.156933 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.222089 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.281915 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.293661 4941 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.301403 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.356797 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.467646 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.470343 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.578554 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.612559 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.656167 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.709808 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.773836 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.832162 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.837403 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.918346 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 27 19:40:04 crc kubenswrapper[4941]: I0227 19:40:04.986247 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.048951 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.129331 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.144745 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.160694 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.192576 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.203038 4941 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.232167 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.346024 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.358257 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.388378 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.468109 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.489010 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.513727 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.516638 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.602701 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.663345 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.812753 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.931720 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 27 19:40:05 crc kubenswrapper[4941]: I0227 19:40:05.943943 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.039990 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.063873 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.077718 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.095069 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.241496 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.350239 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.443453 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 19:40:06 crc kubenswrapper[4941]: E0227 19:40:06.470257 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.480723 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.547353 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.556768 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.561933 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.620909 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.636494 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.705721 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.710978 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.766843 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.774739 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.809747 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.813635 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.868587 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.883080 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 27 19:40:06 crc kubenswrapper[4941]: I0227 19:40:06.980666 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.004863 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.050131 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.099086 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.129339 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.134087 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.167607 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.174225 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.221955 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.425416 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.531180 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.568453 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.674410 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.708861 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.791661 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.848437 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.858003 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 27 19:40:07 crc kubenswrapper[4941]: I0227 19:40:07.968697 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.020741 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.082133 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.083629 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.184360 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.193241 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.197783 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.228454 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.276539 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.311025 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.480071 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.637459 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.674018 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.791198 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.891246 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.906342 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.926024 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.960973 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.974381 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 27 19:40:08 crc kubenswrapper[4941]: I0227 19:40:08.975626 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.002799 4941 ???:1] "http: TLS handshake error from 192.168.126.11:46740: no serving certificate available for the kubelet"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.072882 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.181755 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.225724 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.262014 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.513399 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.531659 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.554423 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.585256 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.741642 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.753575 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.855454 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 27 19:40:09 crc kubenswrapper[4941]: I0227 19:40:09.857237 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.087261 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.100962 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.114810 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.172214 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.175404 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.241978 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.244481 4941 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.531703 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.547576 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.559394 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.630674 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.642020 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.658157 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.741430 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 27 19:40:10 crc kubenswrapper[4941]: I0227 19:40:10.880799 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.023960 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.150090 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.312196 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.396261 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 27 19:40:11 crc kubenswrapper[4941]: E0227 19:40:11.468750 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5"
Feb 27 19:40:11 crc kubenswrapper[4941]: E0227 19:40:11.469259 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.479008 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.548096 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.610206 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.625071 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.653943 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.703160 4941 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.728297 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.855793 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.873134 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 27 19:40:11 crc kubenswrapper[4941]: I0227 19:40:11.895937 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.118964 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.185916 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.232581 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.260283 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 27 19:40:12 crc 
kubenswrapper[4941]: I0227 19:40:12.416399 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.418532 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.423707 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.460914 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.490957 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.536693 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.570167 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.596228 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.619773 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.652772 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.748269 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 19:40:12 crc 
kubenswrapper[4941]: I0227 19:40:12.757686 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.826620 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 19:40:12 crc kubenswrapper[4941]: I0227 19:40:12.962072 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.009105 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.047741 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.150805 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.170057 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.355772 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.358905 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.409874 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.410794 4941 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 19:40:13 crc 
kubenswrapper[4941]: I0227 19:40:13.416227 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.514354 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.677243 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.682281 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.688013 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.704728 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.874260 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 19:40:13 crc kubenswrapper[4941]: I0227 19:40:13.950142 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.007102 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.007800 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.196357 4941 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.228640 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.249330 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.273903 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.436282 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.526033 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.537973 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.553723 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.625734 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.762636 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 19:40:14 crc kubenswrapper[4941]: I0227 19:40:14.958427 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.020211 4941 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.022952 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.128812 4941 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.141926 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.146669 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.206869 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.246753 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.297112 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.345277 4941 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.351710 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct" podStartSLOduration=50.351684956 podStartE2EDuration="50.351684956s" podCreationTimestamp="2026-02-27 19:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 19:39:55.742870202 +0000 UTC m=+314.004010622" watchObservedRunningTime="2026-02-27 19:40:15.351684956 +0000 UTC m=+333.612825406" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.352078 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99bf458b-6dm2m" podStartSLOduration=51.352034717 podStartE2EDuration="51.352034717s" podCreationTimestamp="2026-02-27 19:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:39:56.752557856 +0000 UTC m=+315.013698276" watchObservedRunningTime="2026-02-27 19:40:15.352034717 +0000 UTC m=+333.613175167" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.354883 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b9bp4","openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.354967 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr","openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 19:40:15 crc kubenswrapper[4941]: E0227 19:40:15.355207 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" containerName="oauth-openshift" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355232 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" containerName="oauth-openshift" Feb 27 19:40:15 crc kubenswrapper[4941]: E0227 19:40:15.355247 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" containerName="installer" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355254 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" containerName="installer" Feb 27 
19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355377 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" containerName="oauth-openshift" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355388 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21ea716-ea1a-46b4-93be-5be9ca44cccb" containerName="installer" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355578 4941 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.355617 4941 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62c6948e-2810-45dd-a4b0-eac6107c7799" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.356008 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5975749d64-hdnct","openshift-controller-manager/controller-manager-99bf458b-6dm2m"] Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.356180 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.360235 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.360958 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.361219 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.361350 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.362165 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.362826 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.362930 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.363049 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.363329 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.363376 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 19:40:15 crc 
kubenswrapper[4941]: I0227 19:40:15.363733 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.363865 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.363881 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.377148 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.377703 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.380419 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405041 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.405020331 podStartE2EDuration="25.405020331s" podCreationTimestamp="2026-02-27 19:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:40:15.403776783 +0000 UTC m=+333.664917213" watchObservedRunningTime="2026-02-27 19:40:15.405020331 +0000 UTC m=+333.666160751" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405755 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405816 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-policies\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405890 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405929 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.405989 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: 
\"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406023 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406094 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406117 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406136 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfn7\" (UniqueName: \"kubernetes.io/projected/47e3e4c0-2961-4111-8aa4-eddea94e9fac-kube-api-access-jcfn7\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 
19:40:15.406163 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406202 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406240 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406267 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-dir\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.406319 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507646 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507706 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507739 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507760 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " 
pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507787 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.507806 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508482 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfn7\" (UniqueName: \"kubernetes.io/projected/47e3e4c0-2961-4111-8aa4-eddea94e9fac-kube-api-access-jcfn7\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508503 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508526 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508551 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508573 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-dir\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508604 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.508633 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 
crc kubenswrapper[4941]: I0227 19:40:15.508650 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-policies\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.509035 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-dir\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.509587 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-audit-policies\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.510171 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-service-ca\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.510171 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " 
pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.510691 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.515329 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.515479 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.515498 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-session\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.518456 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.522706 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-router-certs\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.522733 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-error\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.524286 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.524980 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47e3e4c0-2961-4111-8aa4-eddea94e9fac-v4-0-config-user-template-login\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " 
pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.545794 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.547395 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfn7\" (UniqueName: \"kubernetes.io/projected/47e3e4c0-2961-4111-8aa4-eddea94e9fac-kube-api-access-jcfn7\") pod \"oauth-openshift-9b46ffd8b-4p4dr\" (UID: \"47e3e4c0-2961-4111-8aa4-eddea94e9fac\") " pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.565874 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.683989 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:15 crc kubenswrapper[4941]: I0227 19:40:15.696898 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.010994 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dtt9s"] Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.013388 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.018520 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.101908 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.104791 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.116228 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsdh\" (UniqueName: \"kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh\") pod \"auto-csr-approver-29537020-dtt9s\" (UID: \"a92b6a49-76c7-44fd-8610-9918071ec1ae\") " pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.116644 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.187360 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.199495 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.218055 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcsdh\" (UniqueName: \"kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh\") pod \"auto-csr-approver-29537020-dtt9s\" (UID: 
\"a92b6a49-76c7-44fd-8610-9918071ec1ae\") " pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.239131 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcsdh\" (UniqueName: \"kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh\") pod \"auto-csr-approver-29537020-dtt9s\" (UID: \"a92b6a49-76c7-44fd-8610-9918071ec1ae\") " pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.262177 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dtt9s"] Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.269771 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr"] Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.324687 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.336621 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.418083 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.478350 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815c7d43-e40d-4519-80ce-13df0e8d63ff" path="/var/lib/kubelet/pods/815c7d43-e40d-4519-80ce-13df0e8d63ff/volumes" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.530705 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.595318 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.621702 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.709749 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr"] Feb 27 19:40:16 crc kubenswrapper[4941]: W0227 19:40:16.714201 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e3e4c0_2961_4111_8aa4_eddea94e9fac.slice/crio-d9f6b4b70d740bbd29fb96809c722715cf591ce396ab4778770f43596ee5cde2 WatchSource:0}: Error finding container d9f6b4b70d740bbd29fb96809c722715cf591ce396ab4778770f43596ee5cde2: Status 404 returned error can't find the container with id d9f6b4b70d740bbd29fb96809c722715cf591ce396ab4778770f43596ee5cde2 Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.749262 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dtt9s"] Feb 27 19:40:16 crc kubenswrapper[4941]: W0227 19:40:16.752040 4941 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92b6a49_76c7_44fd_8610_9918071ec1ae.slice/crio-ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874 WatchSource:0}: Error finding container ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874: Status 404 returned error can't find the container with id ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874 Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.851227 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" event={"ID":"a92b6a49-76c7-44fd-8610-9918071ec1ae","Type":"ContainerStarted","Data":"ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874"} Feb 27 19:40:16 crc kubenswrapper[4941]: I0227 19:40:16.852937 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" event={"ID":"47e3e4c0-2961-4111-8aa4-eddea94e9fac","Type":"ContainerStarted","Data":"d9f6b4b70d740bbd29fb96809c722715cf591ce396ab4778770f43596ee5cde2"} Feb 27 19:40:17 crc kubenswrapper[4941]: I0227 19:40:17.235895 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 19:40:17 crc kubenswrapper[4941]: E0227 19:40:17.469020 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:40:17 crc kubenswrapper[4941]: I0227 19:40:17.858825 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" 
event={"ID":"47e3e4c0-2961-4111-8aa4-eddea94e9fac","Type":"ContainerStarted","Data":"b56c3cafd1e819442968844e8b7540b82adb40850b3116697b6f352ef3ad0535"} Feb 27 19:40:17 crc kubenswrapper[4941]: I0227 19:40:17.859172 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:17 crc kubenswrapper[4941]: I0227 19:40:17.863483 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" Feb 27 19:40:17 crc kubenswrapper[4941]: I0227 19:40:17.877496 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9b46ffd8b-4p4dr" podStartSLOduration=60.877457567 podStartE2EDuration="1m0.877457567s" podCreationTimestamp="2026-02-27 19:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:40:17.873886516 +0000 UTC m=+336.135026926" watchObservedRunningTime="2026-02-27 19:40:17.877457567 +0000 UTC m=+336.138597987" Feb 27 19:40:18 crc kubenswrapper[4941]: I0227 19:40:18.571024 4941 csr.go:261] certificate signing request csr-2t4s5 is approved, waiting to be issued Feb 27 19:40:18 crc kubenswrapper[4941]: I0227 19:40:18.596401 4941 csr.go:257] certificate signing request csr-2t4s5 is issued Feb 27 19:40:18 crc kubenswrapper[4941]: I0227 19:40:18.866721 4941 generic.go:334] "Generic (PLEG): container finished" podID="a92b6a49-76c7-44fd-8610-9918071ec1ae" containerID="795ef0d3eb8bcaac5780f8d47cc9f00b6eaa24afe889cc1ec62a621daa604867" exitCode=0 Feb 27 19:40:18 crc kubenswrapper[4941]: I0227 19:40:18.867316 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" event={"ID":"a92b6a49-76c7-44fd-8610-9918071ec1ae","Type":"ContainerDied","Data":"795ef0d3eb8bcaac5780f8d47cc9f00b6eaa24afe889cc1ec62a621daa604867"} Feb 
27 19:40:19 crc kubenswrapper[4941]: I0227 19:40:19.597729 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-24 22:59:43.323296943 +0000 UTC Feb 27 19:40:19 crc kubenswrapper[4941]: I0227 19:40:19.597792 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6483h19m23.725509712s for next certificate rotation Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.120876 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.170695 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcsdh\" (UniqueName: \"kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh\") pod \"a92b6a49-76c7-44fd-8610-9918071ec1ae\" (UID: \"a92b6a49-76c7-44fd-8610-9918071ec1ae\") " Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.176151 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh" (OuterVolumeSpecName: "kube-api-access-kcsdh") pod "a92b6a49-76c7-44fd-8610-9918071ec1ae" (UID: "a92b6a49-76c7-44fd-8610-9918071ec1ae"). InnerVolumeSpecName "kube-api-access-kcsdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.271862 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcsdh\" (UniqueName: \"kubernetes.io/projected/a92b6a49-76c7-44fd-8610-9918071ec1ae-kube-api-access-kcsdh\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.598771 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-01 19:38:37.904379649 +0000 UTC Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.598821 4941 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7391h58m17.305563464s for next certificate rotation Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.886728 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.886618 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dtt9s" event={"ID":"a92b6a49-76c7-44fd-8610-9918071ec1ae","Type":"ContainerDied","Data":"ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874"} Feb 27 19:40:20 crc kubenswrapper[4941]: I0227 19:40:20.888570 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba014dbdcb12e13178e68094df2baab7ec539d64fdf4f28cb3f4c968798fb874" Feb 27 19:40:23 crc kubenswrapper[4941]: E0227 19:40:23.470826 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:40:24 crc kubenswrapper[4941]: I0227 19:40:24.870277 4941 kubelet.go:2431] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 19:40:24 crc kubenswrapper[4941]: I0227 19:40:24.871718 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a14050cc3ad575814029d2b905bfcc87d44b751e722de0f4ca805e7588a3ffc9" gracePeriod=5 Feb 27 19:40:25 crc kubenswrapper[4941]: E0227 19:40:25.469763 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:40:29 crc kubenswrapper[4941]: I0227 19:40:29.940051 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 19:40:29 crc kubenswrapper[4941]: I0227 19:40:29.940417 4941 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a14050cc3ad575814029d2b905bfcc87d44b751e722de0f4ca805e7588a3ffc9" exitCode=137 Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.443925 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.444040 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.519943 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520037 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520081 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520172 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520198 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520285 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520359 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520397 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520580 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520641 4941 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520664 4941 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.520675 4941 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.528025 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.622445 4941 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.622560 4941 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.949265 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.949340 4941 scope.go:117] "RemoveContainer" containerID="a14050cc3ad575814029d2b905bfcc87d44b751e722de0f4ca805e7588a3ffc9" Feb 27 19:40:30 crc kubenswrapper[4941]: I0227 19:40:30.949456 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 19:40:31 crc kubenswrapper[4941]: E0227 19:40:31.469522 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:40:32 crc kubenswrapper[4941]: I0227 19:40:32.477097 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 19:40:35 crc kubenswrapper[4941]: E0227 19:40:35.469956 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:40:39 crc kubenswrapper[4941]: I0227 19:40:39.571227 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 19:40:40 crc kubenswrapper[4941]: E0227 19:40:40.468872 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:40:40 crc kubenswrapper[4941]: I0227 19:40:40.956595 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 19:40:42 crc kubenswrapper[4941]: I0227 19:40:42.286123 4941 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 19:40:43 crc kubenswrapper[4941]: E0227 19:40:43.469567 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:40:43 crc kubenswrapper[4941]: I0227 19:40:43.712243 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 19:40:45 crc kubenswrapper[4941]: I0227 19:40:45.829318 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 19:40:50 crc kubenswrapper[4941]: E0227 19:40:50.471068 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kllrr" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" Feb 27 19:40:50 crc kubenswrapper[4941]: I0227 19:40:50.483334 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 19:40:53 crc kubenswrapper[4941]: E0227 19:40:53.142047 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:40:53 crc kubenswrapper[4941]: E0227 19:40:53.142215 4941 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:40:53 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:40:53 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-k4vjk_openshift-infra(0d98f658-1f8e-41f5-bc4e-2f442243e453): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:40:53 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:40:53 crc kubenswrapper[4941]: E0227 19:40:53.143404 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 
(Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:40:53 crc kubenswrapper[4941]: E0227 19:40:53.468372 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9kv24" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" Feb 27 19:40:57 crc kubenswrapper[4941]: E0227 19:40:57.468469 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9t78" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.863241 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n48bk"] Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.864039 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n48bk" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="registry-server" containerID="cri-o://75a32d4012f15487398d3d803b3055e4fe2ce6b4d4576a8e92159c3f6b47ea81" gracePeriod=30 Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.867460 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kv24"] Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.881802 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r9t78"] Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.915048 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"] Feb 27 
19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.915635 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator" containerID="cri-o://ddb6c404d693d64f0dbfd5d5a7e00a17e0986cde003c5cf9608f40f9bb75264c" gracePeriod=30 Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.924850 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"] Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.928979 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"] Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.929296 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bjr9" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="registry-server" containerID="cri-o://63a678bba0bf66a4eb5e3bc0fc0e3a3d5e7e5748fa4d3e7d38135a6b816a32c0" gracePeriod=30 Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.948982 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl67g"] Feb 27 19:41:01 crc kubenswrapper[4941]: E0227 19:41:01.949239 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92b6a49-76c7-44fd-8610-9918071ec1ae" containerName="oc" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.949253 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92b6a49-76c7-44fd-8610-9918071ec1ae" containerName="oc" Feb 27 19:41:01 crc kubenswrapper[4941]: E0227 19:41:01.949287 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.949295 4941 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.949422 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.949444 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92b6a49-76c7-44fd-8610-9918071ec1ae" containerName="oc" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.949969 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:01 crc kubenswrapper[4941]: I0227 19:41:01.955581 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl67g"] Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.027357 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.027421 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.027460 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb86\" (UniqueName: 
\"kubernetes.io/projected/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-kube-api-access-lrb86\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.127799 4941 generic.go:334] "Generic (PLEG): container finished" podID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerID="ddb6c404d693d64f0dbfd5d5a7e00a17e0986cde003c5cf9608f40f9bb75264c" exitCode=0 Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.127878 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" event={"ID":"97e2661c-8124-4c95-a2c4-deb0e07cb14f","Type":"ContainerDied","Data":"ddb6c404d693d64f0dbfd5d5a7e00a17e0986cde003c5cf9608f40f9bb75264c"} Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.128387 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.128445 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.128505 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrb86\" (UniqueName: \"kubernetes.io/projected/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-kube-api-access-lrb86\") pod 
\"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.130427 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.134132 4941 generic.go:334] "Generic (PLEG): container finished" podID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerID="75a32d4012f15487398d3d803b3055e4fe2ce6b4d4576a8e92159c3f6b47ea81" exitCode=0 Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.134186 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerDied","Data":"75a32d4012f15487398d3d803b3055e4fe2ce6b4d4576a8e92159c3f6b47ea81"} Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.138894 4941 generic.go:334] "Generic (PLEG): container finished" podID="05415fb4-4075-493f-91c7-a53f30a70618" containerID="63a678bba0bf66a4eb5e3bc0fc0e3a3d5e7e5748fa4d3e7d38135a6b816a32c0" exitCode=0 Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.138920 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerDied","Data":"63a678bba0bf66a4eb5e3bc0fc0e3a3d5e7e5748fa4d3e7d38135a6b816a32c0"} Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.139262 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.144757 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrb86\" (UniqueName: \"kubernetes.io/projected/4cae2ecf-4f79-4699-8d3e-e10e965eaa7b-kube-api-access-lrb86\") pod \"marketplace-operator-79b997595-sl67g\" (UID: \"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.174688 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.399401 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9t78" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.533324 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content\") pod \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.533367 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities\") pod \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.533443 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xstmx\" (UniqueName: 
\"kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx\") pod \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\" (UID: \"5d3d1f1c-429f-4fd3-a28d-089c23afbbba\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.534320 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d3d1f1c-429f-4fd3-a28d-089c23afbbba" (UID: "5d3d1f1c-429f-4fd3-a28d-089c23afbbba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.534370 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities" (OuterVolumeSpecName: "utilities") pod "5d3d1f1c-429f-4fd3-a28d-089c23afbbba" (UID: "5d3d1f1c-429f-4fd3-a28d-089c23afbbba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.538116 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx" (OuterVolumeSpecName: "kube-api-access-xstmx") pod "5d3d1f1c-429f-4fd3-a28d-089c23afbbba" (UID: "5d3d1f1c-429f-4fd3-a28d-089c23afbbba"). InnerVolumeSpecName "kube-api-access-xstmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.568344 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.614422 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kllrr" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.617237 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kv24" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.634448 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xstmx\" (UniqueName: \"kubernetes.io/projected/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-kube-api-access-xstmx\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.634496 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.634507 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3d1f1c-429f-4fd3-a28d-089c23afbbba-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.639834 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n48bk" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735066 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbsl\" (UniqueName: \"kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl\") pod \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735112 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content\") pod \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735152 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics\") pod \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735187 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities\") pod \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735212 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities\") pod \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735253 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content\") pod \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735271 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca\") pod \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\" (UID: \"97e2661c-8124-4c95-a2c4-deb0e07cb14f\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735302 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xd6\" (UniqueName: \"kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6\") pod \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\" (UID: \"81b6db0c-c9b7-4f84-8ec4-c690e0c59788\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735338 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blwb9\" (UniqueName: \"kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9\") pod \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\" (UID: \"bd71dd28-494b-4f92-8cf2-f79b4709c6d5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735500 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd71dd28-494b-4f92-8cf2-f79b4709c6d5" (UID: "bd71dd28-494b-4f92-8cf2-f79b4709c6d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.735789 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81b6db0c-c9b7-4f84-8ec4-c690e0c59788" (UID: "81b6db0c-c9b7-4f84-8ec4-c690e0c59788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.736126 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities" (OuterVolumeSpecName: "utilities") pod "81b6db0c-c9b7-4f84-8ec4-c690e0c59788" (UID: "81b6db0c-c9b7-4f84-8ec4-c690e0c59788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.736223 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities" (OuterVolumeSpecName: "utilities") pod "bd71dd28-494b-4f92-8cf2-f79b4709c6d5" (UID: "bd71dd28-494b-4f92-8cf2-f79b4709c6d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.736691 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "97e2661c-8124-4c95-a2c4-deb0e07cb14f" (UID: "97e2661c-8124-4c95-a2c4-deb0e07cb14f"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.740676 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9" (OuterVolumeSpecName: "kube-api-access-blwb9") pod "bd71dd28-494b-4f92-8cf2-f79b4709c6d5" (UID: "bd71dd28-494b-4f92-8cf2-f79b4709c6d5"). InnerVolumeSpecName "kube-api-access-blwb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.740728 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6" (OuterVolumeSpecName: "kube-api-access-z6xd6") pod "81b6db0c-c9b7-4f84-8ec4-c690e0c59788" (UID: "81b6db0c-c9b7-4f84-8ec4-c690e0c59788"). InnerVolumeSpecName "kube-api-access-z6xd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.742626 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "97e2661c-8124-4c95-a2c4-deb0e07cb14f" (UID: "97e2661c-8124-4c95-a2c4-deb0e07cb14f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.742658 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl" (OuterVolumeSpecName: "kube-api-access-rfbsl") pod "97e2661c-8124-4c95-a2c4-deb0e07cb14f" (UID: "97e2661c-8124-4c95-a2c4-deb0e07cb14f"). InnerVolumeSpecName "kube-api-access-rfbsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.797050 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl67g"] Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.838296 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content\") pod \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.838433 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities\") pod \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.838524 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrsg\" (UniqueName: \"kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg\") pod \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\" (UID: \"91aa0e95-3a50-4027-abeb-b8bd2abbcea5\") " Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.838998 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6xd6\" (UniqueName: \"kubernetes.io/projected/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-kube-api-access-z6xd6\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839032 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blwb9\" (UniqueName: \"kubernetes.io/projected/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-kube-api-access-blwb9\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839048 4941 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rfbsl\" (UniqueName: \"kubernetes.io/projected/97e2661c-8124-4c95-a2c4-deb0e07cb14f-kube-api-access-rfbsl\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839064 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839077 4941 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839090 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839100 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd71dd28-494b-4f92-8cf2-f79b4709c6d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839109 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b6db0c-c9b7-4f84-8ec4-c690e0c59788-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839122 4941 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97e2661c-8124-4c95-a2c4-deb0e07cb14f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.839821 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities" (OuterVolumeSpecName: "utilities") pod "91aa0e95-3a50-4027-abeb-b8bd2abbcea5" (UID: "91aa0e95-3a50-4027-abeb-b8bd2abbcea5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.842612 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg" (OuterVolumeSpecName: "kube-api-access-ftrsg") pod "91aa0e95-3a50-4027-abeb-b8bd2abbcea5" (UID: "91aa0e95-3a50-4027-abeb-b8bd2abbcea5"). InnerVolumeSpecName "kube-api-access-ftrsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.884873 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bjr9" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.891563 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91aa0e95-3a50-4027-abeb-b8bd2abbcea5" (UID: "91aa0e95-3a50-4027-abeb-b8bd2abbcea5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.940055 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.940521 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:02 crc kubenswrapper[4941]: I0227 19:41:02.940538 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftrsg\" (UniqueName: \"kubernetes.io/projected/91aa0e95-3a50-4027-abeb-b8bd2abbcea5-kube-api-access-ftrsg\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.041141 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities\") pod \"05415fb4-4075-493f-91c7-a53f30a70618\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.041270 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content\") pod \"05415fb4-4075-493f-91c7-a53f30a70618\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.041326 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j5cs\" (UniqueName: \"kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs\") pod \"05415fb4-4075-493f-91c7-a53f30a70618\" (UID: \"05415fb4-4075-493f-91c7-a53f30a70618\") " Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.042617 
4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities" (OuterVolumeSpecName: "utilities") pod "05415fb4-4075-493f-91c7-a53f30a70618" (UID: "05415fb4-4075-493f-91c7-a53f30a70618"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.045664 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs" (OuterVolumeSpecName: "kube-api-access-5j5cs") pod "05415fb4-4075-493f-91c7-a53f30a70618" (UID: "05415fb4-4075-493f-91c7-a53f30a70618"). InnerVolumeSpecName "kube-api-access-5j5cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.142356 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j5cs\" (UniqueName: \"kubernetes.io/projected/05415fb4-4075-493f-91c7-a53f30a70618-kube-api-access-5j5cs\") on node \"crc\" DevicePath \"\""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.142387 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.147308 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kllrr" event={"ID":"bd71dd28-494b-4f92-8cf2-f79b4709c6d5","Type":"ContainerDied","Data":"27f28222dcc1d2318165080245f46d50da993b09feddb6060ff5f064396df83e"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.147358 4941 scope.go:117] "RemoveContainer" containerID="a6c9e879e88341ddb92d2a27f9febf8bfb0eedc9c34c806c762be6a439f9c827"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.147461 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kllrr"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.152117 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" event={"ID":"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b","Type":"ContainerStarted","Data":"c03c2d33afd28f23b0228b4fe2b3ff7bd0e3b9de0c9e007314bb74b55b824ba5"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.152153 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" event={"ID":"4cae2ecf-4f79-4699-8d3e-e10e965eaa7b","Type":"ContainerStarted","Data":"a6f684c2fe2eedf1b372541c05ecc338d5292022c7eb9933573d2f90383020ad"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.152349 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.153559 4941 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sl67g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body=
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.153591 4941 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" podUID="4cae2ecf-4f79-4699-8d3e-e10e965eaa7b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.157684 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bjr9" event={"ID":"05415fb4-4075-493f-91c7-a53f30a70618","Type":"ContainerDied","Data":"a1bca9ca3b61419f803d8dc44dab2ea336eb4baedf9adebbcf11482b53c6b755"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.157766 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bjr9"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.163517 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75" event={"ID":"97e2661c-8124-4c95-a2c4-deb0e07cb14f","Type":"ContainerDied","Data":"f9c88c3fe67fa5782e1dd9721d1ba348be0f9f97974009b0e5327384ad52b1a7"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.163566 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xzs75"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.164861 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05415fb4-4075-493f-91c7-a53f30a70618" (UID: "05415fb4-4075-493f-91c7-a53f30a70618"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.166611 4941 scope.go:117] "RemoveContainer" containerID="63a678bba0bf66a4eb5e3bc0fc0e3a3d5e7e5748fa4d3e7d38135a6b816a32c0"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.172341 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g" podStartSLOduration=2.172326279 podStartE2EDuration="2.172326279s" podCreationTimestamp="2026-02-27 19:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:41:03.168915243 +0000 UTC m=+381.430055663" watchObservedRunningTime="2026-02-27 19:41:03.172326279 +0000 UTC m=+381.433466699"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.179621 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kv24" event={"ID":"81b6db0c-c9b7-4f84-8ec4-c690e0c59788","Type":"ContainerDied","Data":"0284ca04245f00fa0134fe4d1d2e91dd3d225742e2fed48bfb3f5ddeb990eeb9"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.179653 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kv24"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.185701 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48bk" event={"ID":"91aa0e95-3a50-4027-abeb-b8bd2abbcea5","Type":"ContainerDied","Data":"b85e592286db17c405ca2bc2041d5cc4e9bedcdfa9c87e3f7cad074d700864e8"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.185728 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48bk"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.190842 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9t78" event={"ID":"5d3d1f1c-429f-4fd3-a28d-089c23afbbba","Type":"ContainerDied","Data":"cadf5f0eea5854df85489cd50e248ba036beb03362362be2b09daa2cdeb61502"}
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.191047 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9t78"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.209513 4941 scope.go:117] "RemoveContainer" containerID="52d53d2e4a2038beba916ed8d057d55cffbdefd6224dcc41d4ce4007b0829d8f"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.221392 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.242490 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kllrr"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.242950 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05415fb4-4075-493f-91c7-a53f30a70618-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.261698 4941 scope.go:117] "RemoveContainer" containerID="2d7728a079399b7b5244d50ba06fa0652669b8038f0252dd5e0697d4db395a96"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.263902 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.269574 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xzs75"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.281422 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kv24"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.287673 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9kv24"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.291236 4941 scope.go:117] "RemoveContainer" containerID="ddb6c404d693d64f0dbfd5d5a7e00a17e0986cde003c5cf9608f40f9bb75264c"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.296862 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r9t78"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.306703 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r9t78"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.313837 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n48bk"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.313989 4941 scope.go:117] "RemoveContainer" containerID="d6c564b158435c2dc89b374a38b50666783aa1713a04c16dc6a9da8bf5bd9c88"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.316349 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n48bk"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.326388 4941 scope.go:117] "RemoveContainer" containerID="75a32d4012f15487398d3d803b3055e4fe2ce6b4d4576a8e92159c3f6b47ea81"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.336422 4941 scope.go:117] "RemoveContainer" containerID="cd345431c7d3fc2bf96b0657fb57454fecb6a77ac08cc0e481421d0caca5ea4a"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.351873 4941 scope.go:117] "RemoveContainer" containerID="4bd71ffc409bb4262bbee3ed86d1540e4c701bc0ec9a8273559d3dd218e83c22"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.363956 4941 scope.go:117] "RemoveContainer" containerID="e8de783b39b847a1ad55382f0a1fb125e965eeeccb613b6f02d24a5107d6f64d"
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.487240 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"]
Feb 27 19:41:03 crc kubenswrapper[4941]: I0227 19:41:03.490913 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bjr9"]
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.217859 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sl67g"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.473658 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05415fb4-4075-493f-91c7-a53f30a70618" path="/var/lib/kubelet/pods/05415fb4-4075-493f-91c7-a53f30a70618/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.474422 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" path="/var/lib/kubelet/pods/5d3d1f1c-429f-4fd3-a28d-089c23afbbba/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.474856 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" path="/var/lib/kubelet/pods/81b6db0c-c9b7-4f84-8ec4-c690e0c59788/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.475273 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" path="/var/lib/kubelet/pods/91aa0e95-3a50-4027-abeb-b8bd2abbcea5/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.475894 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" path="/var/lib/kubelet/pods/97e2661c-8124-4c95-a2c4-deb0e07cb14f/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.476360 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" path="/var/lib/kubelet/pods/bd71dd28-494b-4f92-8cf2-f79b4709c6d5/volumes"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.674735 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkqp2"]
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675006 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675021 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675034 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675041 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675052 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="extract-content"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675060 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="extract-content"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675070 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675078 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675090 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675098 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675106 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="extract-content"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675114 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="extract-content"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675128 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675136 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675150 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675159 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675171 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675180 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: E0227 19:41:04.675189 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675196 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675323 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd71dd28-494b-4f92-8cf2-f79b4709c6d5" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675338 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e2661c-8124-4c95-a2c4-deb0e07cb14f" containerName="marketplace-operator"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675350 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b6db0c-c9b7-4f84-8ec4-c690e0c59788" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675361 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3d1f1c-429f-4fd3-a28d-089c23afbbba" containerName="extract-utilities"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675373 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="91aa0e95-3a50-4027-abeb-b8bd2abbcea5" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.675383 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="05415fb4-4075-493f-91c7-a53f30a70618" containerName="registry-server"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.676296 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.678768 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.691237 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkqp2"]
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.788176 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-utilities\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.788262 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7nt\" (UniqueName: \"kubernetes.io/projected/048b2614-045b-4bed-89ef-8554c574f3e6-kube-api-access-tp7nt\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.788291 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-catalog-content\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.889868 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-utilities\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.889980 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7nt\" (UniqueName: \"kubernetes.io/projected/048b2614-045b-4bed-89ef-8554c574f3e6-kube-api-access-tp7nt\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.890015 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-catalog-content\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.890533 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-catalog-content\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.892293 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048b2614-045b-4bed-89ef-8554c574f3e6-utilities\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:04 crc kubenswrapper[4941]: I0227 19:41:04.911805 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7nt\" (UniqueName: \"kubernetes.io/projected/048b2614-045b-4bed-89ef-8554c574f3e6-kube-api-access-tp7nt\") pod \"redhat-marketplace-vkqp2\" (UID: \"048b2614-045b-4bed-89ef-8554c574f3e6\") " pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.004296 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.432185 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkqp2"]
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.669849 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b25c9"]
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.670942 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.672889 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.681713 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b25c9"]
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.698320 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-utilities\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.698374 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-catalog-content\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.698400 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rzn\" (UniqueName: \"kubernetes.io/projected/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-kube-api-access-66rzn\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: E0227 19:41:05.712877 4941 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048b2614_045b_4bed_89ef_8554c574f3e6.slice/crio-conmon-d6dbdacd9d306f7a6b1be37991e9bde51a5551a5e919af963386f37069689fa9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048b2614_045b_4bed_89ef_8554c574f3e6.slice/crio-d6dbdacd9d306f7a6b1be37991e9bde51a5551a5e919af963386f37069689fa9.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.799202 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rzn\" (UniqueName: \"kubernetes.io/projected/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-kube-api-access-66rzn\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.799694 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-utilities\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.800150 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-utilities\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.801087 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-catalog-content\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.801422 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-catalog-content\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.819836 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rzn\" (UniqueName: \"kubernetes.io/projected/5b6198dd-a465-4ed8-b4d1-b31c1cf9a266-kube-api-access-66rzn\") pod \"redhat-operators-b25c9\" (UID: \"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266\") " pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:05 crc kubenswrapper[4941]: I0227 19:41:05.994093 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b25c9"
Feb 27 19:41:06 crc kubenswrapper[4941]: I0227 19:41:06.223564 4941 generic.go:334] "Generic (PLEG): container finished" podID="048b2614-045b-4bed-89ef-8554c574f3e6" containerID="d6dbdacd9d306f7a6b1be37991e9bde51a5551a5e919af963386f37069689fa9" exitCode=0
Feb 27 19:41:06 crc kubenswrapper[4941]: I0227 19:41:06.223611 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkqp2" event={"ID":"048b2614-045b-4bed-89ef-8554c574f3e6","Type":"ContainerDied","Data":"d6dbdacd9d306f7a6b1be37991e9bde51a5551a5e919af963386f37069689fa9"}
Feb 27 19:41:06 crc kubenswrapper[4941]: I0227 19:41:06.223641 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkqp2" event={"ID":"048b2614-045b-4bed-89ef-8554c574f3e6","Type":"ContainerStarted","Data":"c4d86673e017e3e4ef3b3249e78cead83c12791b6490f9097911e5cd74f617e9"}
Feb 27 19:41:06 crc kubenswrapper[4941]: I0227 19:41:06.358380 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b25c9"]
Feb 27 19:41:06 crc kubenswrapper[4941]: W0227 19:41:06.364704 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6198dd_a465_4ed8_b4d1_b31c1cf9a266.slice/crio-892284c15119debf1144cd749de0bbe2334bd4586f49399927fd85f5ed07c74f WatchSource:0}: Error finding container 892284c15119debf1144cd749de0bbe2334bd4586f49399927fd85f5ed07c74f: Status 404 returned error can't find the container with id 892284c15119debf1144cd749de0bbe2334bd4586f49399927fd85f5ed07c74f
Feb 27 19:41:06 crc kubenswrapper[4941]: E0227 19:41:06.834548 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 19:41:06 crc kubenswrapper[4941]: E0227 19:41:06.834691 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:41:06 crc kubenswrapper[4941]: E0227 19:41:06.835858 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.073845 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f87fb"]
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.089517 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.092387 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.106044 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f87fb"]
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.217255 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-catalog-content\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.217308 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwlw\" (UniqueName: \"kubernetes.io/projected/f053baa0-dc63-462c-921e-385f02bda750-kube-api-access-sgwlw\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.217417 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-utilities\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.229466 4941 generic.go:334] "Generic (PLEG): container finished" podID="5b6198dd-a465-4ed8-b4d1-b31c1cf9a266" containerID="af0b2048dc5e5a0390e914bfd86f095eec25c5bd49c04dfc5f8f6df7905a7a77" exitCode=0
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.229584 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b25c9" event={"ID":"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266","Type":"ContainerDied","Data":"af0b2048dc5e5a0390e914bfd86f095eec25c5bd49c04dfc5f8f6df7905a7a77"}
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.229627 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b25c9" event={"ID":"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266","Type":"ContainerStarted","Data":"892284c15119debf1144cd749de0bbe2334bd4586f49399927fd85f5ed07c74f"}
Feb 27 19:41:07 crc kubenswrapper[4941]: E0227 19:41:07.231208 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.318580 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-utilities\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.318684 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-catalog-content\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.318723 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwlw\" (UniqueName: \"kubernetes.io/projected/f053baa0-dc63-462c-921e-385f02bda750-kube-api-access-sgwlw\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.319755 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-catalog-content\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.319755 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f053baa0-dc63-462c-921e-385f02bda750-utilities\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.337350 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwlw\" (UniqueName: \"kubernetes.io/projected/f053baa0-dc63-462c-921e-385f02bda750-kube-api-access-sgwlw\") pod \"community-operators-f87fb\" (UID: \"f053baa0-dc63-462c-921e-385f02bda750\") " pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.416423 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f87fb"
Feb 27 19:41:07 crc kubenswrapper[4941]: E0227 19:41:07.470305 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453"
Feb 27 19:41:07 crc kubenswrapper[4941]: I0227 19:41:07.825062 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f87fb"]
Feb 27 19:41:07 crc kubenswrapper[4941]: W0227 19:41:07.831346 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf053baa0_dc63_462c_921e_385f02bda750.slice/crio-2a5affa9921fa9820fa70094579b27dea8a3452d0f5946cad76a84eb3ed7770a WatchSource:0}: Error finding container 2a5affa9921fa9820fa70094579b27dea8a3452d0f5946cad76a84eb3ed7770a: Status 404 returned error can't find the container with id 2a5affa9921fa9820fa70094579b27dea8a3452d0f5946cad76a84eb3ed7770a
Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.074589 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zs7bf"]
Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.076281 4941 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.078922 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.083127 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs7bf"] Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.229571 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-utilities\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.229627 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndzc\" (UniqueName: \"kubernetes.io/projected/1737ca02-aded-4254-b433-aac4a9ccad71-kube-api-access-bndzc\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.229695 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-catalog-content\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.237006 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b25c9" 
event={"ID":"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266","Type":"ContainerStarted","Data":"b5286defe1ca5b3d138b88afd29face017d153f8f1d4935653e64ade827d8006"} Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.238362 4941 generic.go:334] "Generic (PLEG): container finished" podID="f053baa0-dc63-462c-921e-385f02bda750" containerID="323c1e3362c306d133bce54d78788e688b7ef85bc45284c08dcbcd18bb48155f" exitCode=0 Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.238394 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f87fb" event={"ID":"f053baa0-dc63-462c-921e-385f02bda750","Type":"ContainerDied","Data":"323c1e3362c306d133bce54d78788e688b7ef85bc45284c08dcbcd18bb48155f"} Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.238422 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f87fb" event={"ID":"f053baa0-dc63-462c-921e-385f02bda750","Type":"ContainerStarted","Data":"2a5affa9921fa9820fa70094579b27dea8a3452d0f5946cad76a84eb3ed7770a"} Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.331444 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-catalog-content\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.331829 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-utilities\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.331888 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-catalog-content\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.331912 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bndzc\" (UniqueName: \"kubernetes.io/projected/1737ca02-aded-4254-b433-aac4a9ccad71-kube-api-access-bndzc\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.332084 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1737ca02-aded-4254-b433-aac4a9ccad71-utilities\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.350354 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bndzc\" (UniqueName: \"kubernetes.io/projected/1737ca02-aded-4254-b433-aac4a9ccad71-kube-api-access-bndzc\") pod \"certified-operators-zs7bf\" (UID: \"1737ca02-aded-4254-b433-aac4a9ccad71\") " pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.398201 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs7bf" Feb 27 19:41:08 crc kubenswrapper[4941]: I0227 19:41:08.635984 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs7bf"] Feb 27 19:41:08 crc kubenswrapper[4941]: W0227 19:41:08.644703 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1737ca02_aded_4254_b433_aac4a9ccad71.slice/crio-ae6596db79439ceef90d0057e64077919249696c5771ca5a4cb8e94557703dd2 WatchSource:0}: Error finding container ae6596db79439ceef90d0057e64077919249696c5771ca5a4cb8e94557703dd2: Status 404 returned error can't find the container with id ae6596db79439ceef90d0057e64077919249696c5771ca5a4cb8e94557703dd2 Feb 27 19:41:09 crc kubenswrapper[4941]: I0227 19:41:09.245913 4941 generic.go:334] "Generic (PLEG): container finished" podID="1737ca02-aded-4254-b433-aac4a9ccad71" containerID="923115df45cc2041c4f3eadb4cad7f03b672a9e9770332b2c531956c2dfbeff1" exitCode=0 Feb 27 19:41:09 crc kubenswrapper[4941]: I0227 19:41:09.245985 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7bf" event={"ID":"1737ca02-aded-4254-b433-aac4a9ccad71","Type":"ContainerDied","Data":"923115df45cc2041c4f3eadb4cad7f03b672a9e9770332b2c531956c2dfbeff1"} Feb 27 19:41:09 crc kubenswrapper[4941]: I0227 19:41:09.246014 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7bf" event={"ID":"1737ca02-aded-4254-b433-aac4a9ccad71","Type":"ContainerStarted","Data":"ae6596db79439ceef90d0057e64077919249696c5771ca5a4cb8e94557703dd2"} Feb 27 19:41:09 crc kubenswrapper[4941]: I0227 19:41:09.249549 4941 generic.go:334] "Generic (PLEG): container finished" podID="5b6198dd-a465-4ed8-b4d1-b31c1cf9a266" containerID="b5286defe1ca5b3d138b88afd29face017d153f8f1d4935653e64ade827d8006" exitCode=0 Feb 27 19:41:09 crc kubenswrapper[4941]: I0227 
19:41:09.249605 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b25c9" event={"ID":"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266","Type":"ContainerDied","Data":"b5286defe1ca5b3d138b88afd29face017d153f8f1d4935653e64ade827d8006"} Feb 27 19:41:09 crc kubenswrapper[4941]: E0227 19:41:09.866094 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:41:09 crc kubenswrapper[4941]: E0227 19:41:09.867062 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:41:09 crc kubenswrapper[4941]: E0227 19:41:09.870139 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:10 crc kubenswrapper[4941]: I0227 19:41:10.256425 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b25c9" event={"ID":"5b6198dd-a465-4ed8-b4d1-b31c1cf9a266","Type":"ContainerStarted","Data":"f0fa50e1184494cf823bce603bf43f795f7e0fa98a8dbf7ad426eecc725f1d70"} Feb 27 19:41:10 crc kubenswrapper[4941]: E0227 19:41:10.258861 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:10 crc kubenswrapper[4941]: I0227 19:41:10.278121 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b25c9" podStartSLOduration=2.839116319 podStartE2EDuration="5.278103285s" podCreationTimestamp="2026-02-27 19:41:05 +0000 UTC" firstStartedPulling="2026-02-27 19:41:07.231150592 +0000 UTC m=+385.492291032" lastFinishedPulling="2026-02-27 19:41:09.670137578 +0000 UTC m=+387.931277998" observedRunningTime="2026-02-27 19:41:10.274380959 +0000 UTC m=+388.535521389" watchObservedRunningTime="2026-02-27 19:41:10.278103285 +0000 UTC m=+388.539243705" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.265849 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5fqp"] Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.267313 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.277600 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5fqp"] Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396082 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a8573a8-7ad0-4d2f-abe7-017fc647efce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396135 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfzl\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-kube-api-access-pvfzl\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396176 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396195 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-trusted-ca\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396221 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-certificates\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396256 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-bound-sa-token\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396454 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-tls\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.396580 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a8573a8-7ad0-4d2f-abe7-017fc647efce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.414913 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498134 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a8573a8-7ad0-4d2f-abe7-017fc647efce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498231 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a8573a8-7ad0-4d2f-abe7-017fc647efce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498270 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfzl\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-kube-api-access-pvfzl\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498307 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-trusted-ca\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498352 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-certificates\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498406 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-bound-sa-token\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498460 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-tls\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.498967 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a8573a8-7ad0-4d2f-abe7-017fc647efce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.499870 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-certificates\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" 
Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.500877 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8573a8-7ad0-4d2f-abe7-017fc647efce-trusted-ca\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.505008 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-registry-tls\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.510548 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a8573a8-7ad0-4d2f-abe7-017fc647efce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.517543 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfzl\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-kube-api-access-pvfzl\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.527219 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8573a8-7ad0-4d2f-abe7-017fc647efce-bound-sa-token\") pod \"image-registry-66df7c8f76-s5fqp\" (UID: \"2a8573a8-7ad0-4d2f-abe7-017fc647efce\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:11 crc kubenswrapper[4941]: I0227 19:41:11.590561 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.051939 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5fqp"] Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.291375 4941 generic.go:334] "Generic (PLEG): container finished" podID="f053baa0-dc63-462c-921e-385f02bda750" containerID="5bdc6832c36e7c37ae34ead472d02e6c95d142297f399c6a225a8a35ff988215" exitCode=0 Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.291510 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f87fb" event={"ID":"f053baa0-dc63-462c-921e-385f02bda750","Type":"ContainerDied","Data":"5bdc6832c36e7c37ae34ead472d02e6c95d142297f399c6a225a8a35ff988215"} Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.294349 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" event={"ID":"2a8573a8-7ad0-4d2f-abe7-017fc647efce","Type":"ContainerStarted","Data":"6be9bf77d4ffa38ae28126ea54160ccbf586dfdc4a9a5d1b6903015bd1fddebf"} Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.294399 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" event={"ID":"2a8573a8-7ad0-4d2f-abe7-017fc647efce","Type":"ContainerStarted","Data":"5b7bb7dfba745d631e1d5fb26da3828ed16501a28ad174c359c8b2a16a982128"} Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.294530 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:13 crc kubenswrapper[4941]: I0227 19:41:13.338147 4941 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" podStartSLOduration=2.338120473 podStartE2EDuration="2.338120473s" podCreationTimestamp="2026-02-27 19:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:41:13.336842143 +0000 UTC m=+391.597982583" watchObservedRunningTime="2026-02-27 19:41:13.338120473 +0000 UTC m=+391.599260933" Feb 27 19:41:14 crc kubenswrapper[4941]: I0227 19:41:14.300037 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f87fb" event={"ID":"f053baa0-dc63-462c-921e-385f02bda750","Type":"ContainerStarted","Data":"54a274410dcf45d723a83163d20a2da3bc926770d9a8a7e1de16038be46caad0"} Feb 27 19:41:14 crc kubenswrapper[4941]: I0227 19:41:14.321795 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f87fb" podStartSLOduration=1.5107704960000001 podStartE2EDuration="7.321778449s" podCreationTimestamp="2026-02-27 19:41:07 +0000 UTC" firstStartedPulling="2026-02-27 19:41:08.240796554 +0000 UTC m=+386.501936974" lastFinishedPulling="2026-02-27 19:41:14.051804497 +0000 UTC m=+392.312944927" observedRunningTime="2026-02-27 19:41:14.321536002 +0000 UTC m=+392.582676442" watchObservedRunningTime="2026-02-27 19:41:14.321778449 +0000 UTC m=+392.582918869" Feb 27 19:41:15 crc kubenswrapper[4941]: I0227 19:41:15.995280 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b25c9" Feb 27 19:41:15 crc kubenswrapper[4941]: I0227 19:41:15.995681 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b25c9" Feb 27 19:41:16 crc kubenswrapper[4941]: I0227 19:41:16.036644 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b25c9" Feb 
27 19:41:16 crc kubenswrapper[4941]: I0227 19:41:16.364718 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b25c9" Feb 27 19:41:17 crc kubenswrapper[4941]: I0227 19:41:17.417358 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f87fb" Feb 27 19:41:17 crc kubenswrapper[4941]: I0227 19:41:17.417423 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f87fb" Feb 27 19:41:17 crc kubenswrapper[4941]: I0227 19:41:17.463598 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f87fb" Feb 27 19:41:19 crc kubenswrapper[4941]: E0227 19:41:19.469598 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:41:21 crc kubenswrapper[4941]: E0227 19:41:21.999127 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:41:22 crc kubenswrapper[4941]: E0227 19:41:21.999643 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:41:22 crc kubenswrapper[4941]: E0227 19:41:22.000925 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:41:22 crc kubenswrapper[4941]: E0227 19:41:22.064121 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:41:22 crc kubenswrapper[4941]: E0227 19:41:22.064573 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:41:22 crc kubenswrapper[4941]: E0227 19:41:22.066306 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:27 crc kubenswrapper[4941]: I0227 19:41:27.468422 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f87fb" Feb 27 19:41:31 crc kubenswrapper[4941]: I0227 19:41:31.595338 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-s5fqp" Feb 27 19:41:31 crc kubenswrapper[4941]: I0227 19:41:31.649863 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"] Feb 27 19:41:33 crc kubenswrapper[4941]: E0227 19:41:33.469018 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:41:35 crc kubenswrapper[4941]: E0227 19:41:35.470854 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:41:36 crc kubenswrapper[4941]: E0227 19:41:36.468399 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:45 crc kubenswrapper[4941]: E0227 19:41:45.468035 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:41:48 crc kubenswrapper[4941]: E0227 19:41:48.018165 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:41:48 crc kubenswrapper[4941]: E0227 19:41:48.019657 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:41:48 crc kubenswrapper[4941]: E0227 19:41:48.021222 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:50 crc kubenswrapper[4941]: E0227 19:41:50.236258 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:41:50 crc kubenswrapper[4941]: E0227 19:41:50.236748 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:41:50 crc kubenswrapper[4941]: E0227 19:41:50.237973 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:41:56 crc kubenswrapper[4941]: I0227 19:41:56.690526 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" podUID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" containerName="registry" containerID="cri-o://354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9" gracePeriod=30 Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.034461 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109605 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109649 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109694 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc 
kubenswrapper[4941]: I0227 19:41:57.109725 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109755 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnkqw\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109856 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109887 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.109905 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets\") pod \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\" (UID: \"1e15fd8c-4806-428d-ab5a-d9e99c669eaa\") " Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.111397 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.112208 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.117441 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw" (OuterVolumeSpecName: "kube-api-access-pnkqw") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "kube-api-access-pnkqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.117763 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.118127 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.118399 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.121575 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.127099 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1e15fd8c-4806-428d-ab5a-d9e99c669eaa" (UID: "1e15fd8c-4806-428d-ab5a-d9e99c669eaa"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210724 4941 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210747 4941 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210756 4941 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210765 4941 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210773 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnkqw\" (UniqueName: \"kubernetes.io/projected/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-kube-api-access-pnkqw\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210780 4941 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.210791 4941 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e15fd8c-4806-428d-ab5a-d9e99c669eaa-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 19:41:57 crc 
kubenswrapper[4941]: I0227 19:41:57.560248 4941 generic.go:334] "Generic (PLEG): container finished" podID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" containerID="354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9" exitCode=0 Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.560297 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" event={"ID":"1e15fd8c-4806-428d-ab5a-d9e99c669eaa","Type":"ContainerDied","Data":"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9"} Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.560330 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" event={"ID":"1e15fd8c-4806-428d-ab5a-d9e99c669eaa","Type":"ContainerDied","Data":"7aaa1992de4184b21ae463202336db8f0a126e7d4628397fa53cadd1864772c7"} Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.560352 4941 scope.go:117] "RemoveContainer" containerID="354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.560400 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2h46l" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.584754 4941 scope.go:117] "RemoveContainer" containerID="354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9" Feb 27 19:41:57 crc kubenswrapper[4941]: E0227 19:41:57.585334 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9\": container with ID starting with 354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9 not found: ID does not exist" containerID="354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.585384 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9"} err="failed to get container status \"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9\": rpc error: code = NotFound desc = could not find container \"354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9\": container with ID starting with 354931583debb1fb45b2e189e652d00f50ae91e429d5d65b76f85966af3fbeb9 not found: ID does not exist" Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.595587 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"] Feb 27 19:41:57 crc kubenswrapper[4941]: I0227 19:41:57.607130 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2h46l"] Feb 27 19:41:58 crc kubenswrapper[4941]: E0227 19:41:58.469442 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:41:58 crc kubenswrapper[4941]: E0227 19:41:58.469909 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:41:58 crc kubenswrapper[4941]: I0227 19:41:58.473319 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" path="/var/lib/kubelet/pods/1e15fd8c-4806-428d-ab5a-d9e99c669eaa/volumes" Feb 27 19:41:59 crc kubenswrapper[4941]: I0227 19:41:59.850737 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:41:59 crc kubenswrapper[4941]: I0227 19:41:59.850818 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.131196 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537022-s2ljb"] Feb 27 19:42:00 crc kubenswrapper[4941]: E0227 19:42:00.131404 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" containerName="registry" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.131416 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" 
containerName="registry" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.131521 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e15fd8c-4806-428d-ab5a-d9e99c669eaa" containerName="registry" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.131850 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.133741 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.138948 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-s2ljb"] Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.256491 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7m7\" (UniqueName: \"kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7\") pod \"auto-csr-approver-29537022-s2ljb\" (UID: \"39ecd957-4632-4dfb-9f87-28a2a83197ad\") " pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.357384 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7m7\" (UniqueName: \"kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7\") pod \"auto-csr-approver-29537022-s2ljb\" (UID: \"39ecd957-4632-4dfb-9f87-28a2a83197ad\") " pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.375263 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7m7\" (UniqueName: \"kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7\") pod \"auto-csr-approver-29537022-s2ljb\" (UID: \"39ecd957-4632-4dfb-9f87-28a2a83197ad\") " 
pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.488707 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:42:00 crc kubenswrapper[4941]: I0227 19:42:00.879344 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-s2ljb"] Feb 27 19:42:01 crc kubenswrapper[4941]: I0227 19:42:01.585380 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" event={"ID":"39ecd957-4632-4dfb-9f87-28a2a83197ad","Type":"ContainerStarted","Data":"693bca2142072fcf8657d4a990bb30322a1e92ae3ff9a139501ed2e8f8a2d4aa"} Feb 27 19:42:01 crc kubenswrapper[4941]: E0227 19:42:01.826390 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:01 crc kubenswrapper[4941]: E0227 19:42:01.826803 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:01 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:01 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp7m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-s2ljb_openshift-infra(39ecd957-4632-4dfb-9f87-28a2a83197ad): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:01 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:42:01 crc kubenswrapper[4941]: E0227 19:42:01.831039 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:02 crc kubenswrapper[4941]: E0227 19:42:02.592070 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:04 crc kubenswrapper[4941]: E0227 19:42:04.469203 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:42:11 crc kubenswrapper[4941]: E0227 19:42:11.468758 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:42:12 crc kubenswrapper[4941]: E0227 19:42:12.478684 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:42:14 crc kubenswrapper[4941]: E0227 19:42:14.452802 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:14 crc kubenswrapper[4941]: E0227 19:42:14.452958 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:14 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c 
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:14 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp7m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-s2ljb_openshift-infra(39ecd957-4632-4dfb-9f87-28a2a83197ad): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:14 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:42:14 crc kubenswrapper[4941]: E0227 19:42:14.454172 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:18 crc kubenswrapper[4941]: E0227 19:42:18.470449 4941 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:42:23 crc kubenswrapper[4941]: E0227 19:42:23.416819 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:23 crc kubenswrapper[4941]: E0227 19:42:23.417045 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:23 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:23 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29537018-k4vjk_openshift-infra(0d98f658-1f8e-41f5-bc4e-2f442243e453): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:23 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:42:23 crc kubenswrapper[4941]: E0227 19:42:23.419094 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:42:26 crc kubenswrapper[4941]: E0227 19:42:26.469696 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:42:26 crc kubenswrapper[4941]: E0227 19:42:26.469851 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:29 crc kubenswrapper[4941]: I0227 19:42:29.851565 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:42:29 crc kubenswrapper[4941]: I0227 19:42:29.851924 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:42:31 crc kubenswrapper[4941]: E0227 19:42:31.186395 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:42:31 crc kubenswrapper[4941]: E0227 19:42:31.186668 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:31 crc kubenswrapper[4941]: E0227 19:42:31.188672 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:42:35 crc kubenswrapper[4941]: E0227 19:42:35.469127 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:42:39 crc kubenswrapper[4941]: E0227 19:42:39.096350 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:42:39 crc kubenswrapper[4941]: E0227 19:42:39.097022 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:39 crc kubenswrapper[4941]: E0227 19:42:39.098252 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:42:42 crc kubenswrapper[4941]: E0227 19:42:42.348595 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:42 crc kubenswrapper[4941]: E0227 19:42:42.348807 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:42 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:42 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp7m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29537022-s2ljb_openshift-infra(39ecd957-4632-4dfb-9f87-28a2a83197ad): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:42 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:42:42 crc kubenswrapper[4941]: E0227 19:42:42.350440 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:46 crc kubenswrapper[4941]: E0227 19:42:46.468309 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:42:50 crc kubenswrapper[4941]: E0227 19:42:50.469579 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:42:52 crc kubenswrapper[4941]: E0227 19:42:52.474184 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:42:57 crc kubenswrapper[4941]: E0227 19:42:57.469339 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:42:57 crc kubenswrapper[4941]: E0227 19:42:57.469718 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:42:59 crc kubenswrapper[4941]: I0227 19:42:59.851077 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:42:59 crc kubenswrapper[4941]: I0227 19:42:59.851154 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:42:59 crc kubenswrapper[4941]: I0227 19:42:59.851203 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:42:59 crc kubenswrapper[4941]: I0227 19:42:59.851813 4941 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:42:59 crc kubenswrapper[4941]: I0227 19:42:59.851890 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921" gracePeriod=600 Feb 27 19:43:00 crc kubenswrapper[4941]: I0227 19:43:00.921784 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921" exitCode=0 Feb 27 19:43:00 crc kubenswrapper[4941]: I0227 19:43:00.921845 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921"} Feb 27 19:43:00 crc kubenswrapper[4941]: I0227 19:43:00.922301 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7"} Feb 27 19:43:00 crc kubenswrapper[4941]: I0227 19:43:00.922321 4941 scope.go:117] "RemoveContainer" containerID="ade3a6391f5237aa6ddf2971ddbe07ebbe1b18978ba616907c98f6dd8bac6817" Feb 27 19:43:03 crc kubenswrapper[4941]: E0227 19:43:03.469010 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:43:05 crc kubenswrapper[4941]: E0227 19:43:05.468949 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:43:10 crc kubenswrapper[4941]: E0227 19:43:10.469584 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:43:11 crc kubenswrapper[4941]: E0227 19:43:11.469553 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:43:16 crc kubenswrapper[4941]: E0227 19:43:16.471397 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:43:17 crc kubenswrapper[4941]: E0227 19:43:17.468030 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:43:23 crc kubenswrapper[4941]: E0227 19:43:23.470442 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:43:24 crc kubenswrapper[4941]: I0227 19:43:24.469374 4941 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:43:25 crc kubenswrapper[4941]: E0227 19:43:25.357067 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:43:25 crc kubenswrapper[4941]: E0227 19:43:25.357251 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:43:25 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:43:25 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp7m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-s2ljb_openshift-infra(39ecd957-4632-4dfb-9f87-28a2a83197ad): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:43:25 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:43:25 crc kubenswrapper[4941]: E0227 19:43:25.358461 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:43:29 crc kubenswrapper[4941]: E0227 19:43:29.469565 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:43:31 crc kubenswrapper[4941]: E0227 19:43:31.468758 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:43:37 crc kubenswrapper[4941]: E0227 19:43:37.469415 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:43:38 crc kubenswrapper[4941]: E0227 19:43:38.470697 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:43:43 crc kubenswrapper[4941]: E0227 19:43:43.469208 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:43:44 crc kubenswrapper[4941]: E0227 19:43:44.469048 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:43:49 crc kubenswrapper[4941]: E0227 19:43:49.468206 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:43:54 crc kubenswrapper[4941]: E0227 19:43:54.764967 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:43:54 crc kubenswrapper[4941]: E0227 19:43:54.765628 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:54 crc kubenswrapper[4941]: E0227 19:43:54.766843 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:43:56 crc kubenswrapper[4941]: E0227 19:43:56.470828 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:43:56 crc kubenswrapper[4941]: E0227 19:43:56.470899 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.132317 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537024-q6npl"] Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.134111 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.142093 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-q6npl"] Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.269751 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwghv\" (UniqueName: \"kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv\") pod \"auto-csr-approver-29537024-q6npl\" (UID: \"76ef0d4b-a801-47ef-98fe-b2b078761207\") " pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.371237 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwghv\" (UniqueName: \"kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv\") pod \"auto-csr-approver-29537024-q6npl\" (UID: \"76ef0d4b-a801-47ef-98fe-b2b078761207\") " pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.391131 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwghv\" (UniqueName: \"kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv\") pod \"auto-csr-approver-29537024-q6npl\" (UID: \"76ef0d4b-a801-47ef-98fe-b2b078761207\") " pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.454916 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:00 crc kubenswrapper[4941]: I0227 19:44:00.658292 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-q6npl"] Feb 27 19:44:01 crc kubenswrapper[4941]: I0227 19:44:01.275542 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-q6npl" event={"ID":"76ef0d4b-a801-47ef-98fe-b2b078761207","Type":"ContainerStarted","Data":"4bd0c491b2c8951dc360a77312d053b8769d4fee74da71ad08e4e6844e6dac68"} Feb 27 19:44:01 crc kubenswrapper[4941]: E0227 19:44:01.509107 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:01 crc kubenswrapper[4941]: E0227 19:44:01.509232 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:01 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:44:01 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-q6npl_openshift-infra(76ef0d4b-a801-47ef-98fe-b2b078761207): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:01 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:44:01 crc kubenswrapper[4941]: E0227 19:44:01.510429 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-q6npl" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" Feb 27 19:44:02 crc kubenswrapper[4941]: E0227 19:44:02.284134 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537024-q6npl" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" Feb 27 19:44:03 crc kubenswrapper[4941]: E0227 19:44:03.470116 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:44:06 crc kubenswrapper[4941]: E0227 19:44:06.468869 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:44:10 crc kubenswrapper[4941]: E0227 19:44:10.070885 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:44:10 crc kubenswrapper[4941]: E0227 19:44:10.071760 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:44:10 crc kubenswrapper[4941]: E0227 19:44:10.073122 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:44:10 crc kubenswrapper[4941]: E0227 19:44:10.469664 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:44:17 crc kubenswrapper[4941]: E0227 19:44:17.470207 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:44:18 crc kubenswrapper[4941]: E0227 19:44:18.368033 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:18 crc kubenswrapper[4941]: E0227 19:44:18.368236 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:18 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 
19:44:18 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-q6npl_openshift-infra(76ef0d4b-a801-47ef-98fe-b2b078761207): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:18 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:44:18 crc kubenswrapper[4941]: E0227 19:44:18.369402 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-q6npl" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" Feb 27 19:44:18 crc kubenswrapper[4941]: E0227 19:44:18.469256 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:44:23 crc kubenswrapper[4941]: E0227 19:44:23.469615 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:44:25 crc kubenswrapper[4941]: E0227 19:44:25.469687 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:44:29 crc kubenswrapper[4941]: E0227 19:44:29.468736 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537024-q6npl" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" Feb 27 19:44:29 crc kubenswrapper[4941]: E0227 19:44:29.468907 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:44:31 crc kubenswrapper[4941]: E0227 19:44:31.471777 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:44:37 crc kubenswrapper[4941]: E0227 19:44:37.468333 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:44:40 crc kubenswrapper[4941]: E0227 19:44:40.468440 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:44:40 crc kubenswrapper[4941]: E0227 19:44:40.468784 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" Feb 27 19:44:44 crc kubenswrapper[4941]: I0227 19:44:44.520868 4941 generic.go:334] "Generic (PLEG): container finished" podID="76ef0d4b-a801-47ef-98fe-b2b078761207" containerID="ccc47308f9e15270871f9b67f98348dcb56ae4139ff6aabd071e53646b179c4f" exitCode=0 Feb 27 19:44:44 crc kubenswrapper[4941]: I0227 19:44:44.520979 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-q6npl" event={"ID":"76ef0d4b-a801-47ef-98fe-b2b078761207","Type":"ContainerDied","Data":"ccc47308f9e15270871f9b67f98348dcb56ae4139ff6aabd071e53646b179c4f"} Feb 27 19:44:45 crc kubenswrapper[4941]: I0227 19:44:45.767558 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:45 crc kubenswrapper[4941]: I0227 19:44:45.878944 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwghv\" (UniqueName: \"kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv\") pod \"76ef0d4b-a801-47ef-98fe-b2b078761207\" (UID: \"76ef0d4b-a801-47ef-98fe-b2b078761207\") " Feb 27 19:44:45 crc kubenswrapper[4941]: I0227 19:44:45.885065 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv" (OuterVolumeSpecName: "kube-api-access-kwghv") pod "76ef0d4b-a801-47ef-98fe-b2b078761207" (UID: "76ef0d4b-a801-47ef-98fe-b2b078761207"). InnerVolumeSpecName "kube-api-access-kwghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:44:45 crc kubenswrapper[4941]: I0227 19:44:45.980086 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwghv\" (UniqueName: \"kubernetes.io/projected/76ef0d4b-a801-47ef-98fe-b2b078761207-kube-api-access-kwghv\") on node \"crc\" DevicePath \"\"" Feb 27 19:44:46 crc kubenswrapper[4941]: E0227 19:44:46.469102 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:44:46 crc kubenswrapper[4941]: I0227 19:44:46.534304 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-q6npl" event={"ID":"76ef0d4b-a801-47ef-98fe-b2b078761207","Type":"ContainerDied","Data":"4bd0c491b2c8951dc360a77312d053b8769d4fee74da71ad08e4e6844e6dac68"} Feb 27 19:44:46 crc kubenswrapper[4941]: I0227 19:44:46.534346 4941 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd0c491b2c8951dc360a77312d053b8769d4fee74da71ad08e4e6844e6dac68" Feb 27 19:44:46 crc kubenswrapper[4941]: I0227 19:44:46.534351 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-q6npl" Feb 27 19:44:52 crc kubenswrapper[4941]: E0227 19:44:52.473169 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" Feb 27 19:44:54 crc kubenswrapper[4941]: E0227 19:44:54.468205 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:44:56 crc kubenswrapper[4941]: I0227 19:44:56.594050 4941 generic.go:334] "Generic (PLEG): container finished" podID="39ecd957-4632-4dfb-9f87-28a2a83197ad" containerID="dda3a1254eb092cd3d656052b4c78dbb7f7dbdea2538595d582368ece6994da7" exitCode=0 Feb 27 19:44:56 crc kubenswrapper[4941]: I0227 19:44:56.594158 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" event={"ID":"39ecd957-4632-4dfb-9f87-28a2a83197ad","Type":"ContainerDied","Data":"dda3a1254eb092cd3d656052b4c78dbb7f7dbdea2538595d582368ece6994da7"} Feb 27 19:44:57 crc kubenswrapper[4941]: I0227 19:44:57.835876 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:44:57 crc kubenswrapper[4941]: I0227 19:44:57.928974 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7m7\" (UniqueName: \"kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7\") pod \"39ecd957-4632-4dfb-9f87-28a2a83197ad\" (UID: \"39ecd957-4632-4dfb-9f87-28a2a83197ad\") " Feb 27 19:44:57 crc kubenswrapper[4941]: I0227 19:44:57.936097 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7" (OuterVolumeSpecName: "kube-api-access-mp7m7") pod "39ecd957-4632-4dfb-9f87-28a2a83197ad" (UID: "39ecd957-4632-4dfb-9f87-28a2a83197ad"). InnerVolumeSpecName "kube-api-access-mp7m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:44:58 crc kubenswrapper[4941]: I0227 19:44:58.030664 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7m7\" (UniqueName: \"kubernetes.io/projected/39ecd957-4632-4dfb-9f87-28a2a83197ad-kube-api-access-mp7m7\") on node \"crc\" DevicePath \"\"" Feb 27 19:44:58 crc kubenswrapper[4941]: I0227 19:44:58.606311 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" event={"ID":"39ecd957-4632-4dfb-9f87-28a2a83197ad","Type":"ContainerDied","Data":"693bca2142072fcf8657d4a990bb30322a1e92ae3ff9a139501ed2e8f8a2d4aa"} Feb 27 19:44:58 crc kubenswrapper[4941]: I0227 19:44:58.606353 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693bca2142072fcf8657d4a990bb30322a1e92ae3ff9a139501ed2e8f8a2d4aa" Feb 27 19:44:58 crc kubenswrapper[4941]: I0227 19:44:58.606411 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-s2ljb" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.144378 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp"] Feb 27 19:45:00 crc kubenswrapper[4941]: E0227 19:45:00.144860 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.144874 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: E0227 19:45:00.144891 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.144898 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.144980 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.144995 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" containerName="oc" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.145349 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.150281 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.150854 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.155752 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp"] Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.256467 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.256553 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rhv\" (UniqueName: \"kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.256581 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.357949 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.358032 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rhv\" (UniqueName: \"kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.358062 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.358928 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.362164 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.377206 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rhv\" (UniqueName: \"kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv\") pod \"collect-profiles-29537025-cqmfp\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.484275 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:00 crc kubenswrapper[4941]: I0227 19:45:00.907564 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp"] Feb 27 19:45:01 crc kubenswrapper[4941]: E0227 19:45:01.470084 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:45:01 crc kubenswrapper[4941]: I0227 19:45:01.626646 4941 generic.go:334] "Generic (PLEG): container finished" podID="66d53559-0fb5-4537-84b3-885f0fd0d217" containerID="a5e909621730e5691ca7475d7a4781084f488a23f6f9fca6e13a6d647dac28bc" exitCode=0 Feb 27 19:45:01 crc kubenswrapper[4941]: I0227 19:45:01.626738 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" 
event={"ID":"66d53559-0fb5-4537-84b3-885f0fd0d217","Type":"ContainerDied","Data":"a5e909621730e5691ca7475d7a4781084f488a23f6f9fca6e13a6d647dac28bc"} Feb 27 19:45:01 crc kubenswrapper[4941]: I0227 19:45:01.626987 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" event={"ID":"66d53559-0fb5-4537-84b3-885f0fd0d217","Type":"ContainerStarted","Data":"55ea5cbd567cebb7f7322344787e1d3bf8118cf04a4a26b5827b3e1a4165d2fa"} Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.826421 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.889366 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume\") pod \"66d53559-0fb5-4537-84b3-885f0fd0d217\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.889437 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume\") pod \"66d53559-0fb5-4537-84b3-885f0fd0d217\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.889502 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rhv\" (UniqueName: \"kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv\") pod \"66d53559-0fb5-4537-84b3-885f0fd0d217\" (UID: \"66d53559-0fb5-4537-84b3-885f0fd0d217\") " Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.890958 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "66d53559-0fb5-4537-84b3-885f0fd0d217" (UID: "66d53559-0fb5-4537-84b3-885f0fd0d217"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.895459 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv" (OuterVolumeSpecName: "kube-api-access-n9rhv") pod "66d53559-0fb5-4537-84b3-885f0fd0d217" (UID: "66d53559-0fb5-4537-84b3-885f0fd0d217"). InnerVolumeSpecName "kube-api-access-n9rhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.895904 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66d53559-0fb5-4537-84b3-885f0fd0d217" (UID: "66d53559-0fb5-4537-84b3-885f0fd0d217"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.990605 4941 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d53559-0fb5-4537-84b3-885f0fd0d217-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.990645 4941 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66d53559-0fb5-4537-84b3-885f0fd0d217-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:02 crc kubenswrapper[4941]: I0227 19:45:02.990660 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rhv\" (UniqueName: \"kubernetes.io/projected/66d53559-0fb5-4537-84b3-885f0fd0d217-kube-api-access-n9rhv\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:03 crc kubenswrapper[4941]: I0227 19:45:03.640460 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" event={"ID":"66d53559-0fb5-4537-84b3-885f0fd0d217","Type":"ContainerDied","Data":"55ea5cbd567cebb7f7322344787e1d3bf8118cf04a4a26b5827b3e1a4165d2fa"} Feb 27 19:45:03 crc kubenswrapper[4941]: I0227 19:45:03.640522 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ea5cbd567cebb7f7322344787e1d3bf8118cf04a4a26b5827b3e1a4165d2fa" Feb 27 19:45:03 crc kubenswrapper[4941]: I0227 19:45:03.640624 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-cqmfp" Feb 27 19:45:04 crc kubenswrapper[4941]: I0227 19:45:04.648152 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" event={"ID":"0d98f658-1f8e-41f5-bc4e-2f442243e453","Type":"ContainerStarted","Data":"3f93780e7ac08dfb661b9a007b0b1a57dd8ca57fbb2d074ff41a7128f4073c3b"} Feb 27 19:45:04 crc kubenswrapper[4941]: I0227 19:45:04.665652 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" podStartSLOduration=2.127610461 podStartE2EDuration="7m4.665622933s" podCreationTimestamp="2026-02-27 19:38:00 +0000 UTC" firstStartedPulling="2026-02-27 19:38:01.746914136 +0000 UTC m=+200.008054546" lastFinishedPulling="2026-02-27 19:45:04.284926588 +0000 UTC m=+622.546067018" observedRunningTime="2026-02-27 19:45:04.659627584 +0000 UTC m=+622.920768014" watchObservedRunningTime="2026-02-27 19:45:04.665622933 +0000 UTC m=+622.926763383" Feb 27 19:45:05 crc kubenswrapper[4941]: E0227 19:45:05.468224 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:45:05 crc kubenswrapper[4941]: I0227 19:45:05.653591 4941 generic.go:334] "Generic (PLEG): container finished" podID="0d98f658-1f8e-41f5-bc4e-2f442243e453" containerID="3f93780e7ac08dfb661b9a007b0b1a57dd8ca57fbb2d074ff41a7128f4073c3b" exitCode=0 Feb 27 19:45:05 crc kubenswrapper[4941]: I0227 19:45:05.653679 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" 
event={"ID":"0d98f658-1f8e-41f5-bc4e-2f442243e453","Type":"ContainerDied","Data":"3f93780e7ac08dfb661b9a007b0b1a57dd8ca57fbb2d074ff41a7128f4073c3b"} Feb 27 19:45:06 crc kubenswrapper[4941]: I0227 19:45:06.836851 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:45:06 crc kubenswrapper[4941]: I0227 19:45:06.941534 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psgk4\" (UniqueName: \"kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4\") pod \"0d98f658-1f8e-41f5-bc4e-2f442243e453\" (UID: \"0d98f658-1f8e-41f5-bc4e-2f442243e453\") " Feb 27 19:45:06 crc kubenswrapper[4941]: I0227 19:45:06.946742 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4" (OuterVolumeSpecName: "kube-api-access-psgk4") pod "0d98f658-1f8e-41f5-bc4e-2f442243e453" (UID: "0d98f658-1f8e-41f5-bc4e-2f442243e453"). InnerVolumeSpecName "kube-api-access-psgk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.042600 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psgk4\" (UniqueName: \"kubernetes.io/projected/0d98f658-1f8e-41f5-bc4e-2f442243e453-kube-api-access-psgk4\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.666158 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" event={"ID":"0d98f658-1f8e-41f5-bc4e-2f442243e453","Type":"ContainerDied","Data":"40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559"} Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.666202 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40de8ce30ce2580dd76aba2b4ec4089f1e4c4007fe188c42b726eda5c42b1559" Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.666298 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-k4vjk" Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.722812 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-k4vjk"] Feb 27 19:45:07 crc kubenswrapper[4941]: I0227 19:45:07.727032 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-k4vjk"] Feb 27 19:45:08 crc kubenswrapper[4941]: I0227 19:45:08.476048 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" path="/var/lib/kubelet/pods/0d98f658-1f8e-41f5-bc4e-2f442243e453/volumes" Feb 27 19:45:15 crc kubenswrapper[4941]: E0227 19:45:15.469779 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:45:18 crc kubenswrapper[4941]: E0227 19:45:18.474167 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:45:29 crc kubenswrapper[4941]: I0227 19:45:29.851032 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:45:29 crc kubenswrapper[4941]: I0227 19:45:29.851819 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:45:30 crc kubenswrapper[4941]: E0227 19:45:30.473269 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:45:30 crc kubenswrapper[4941]: E0227 19:45:30.473607 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:45:42 crc kubenswrapper[4941]: E0227 19:45:42.471729 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:45:45 crc kubenswrapper[4941]: E0227 19:45:45.470391 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:45:54 crc kubenswrapper[4941]: E0227 19:45:54.468654 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:45:57 crc kubenswrapper[4941]: E0227 19:45:57.468325 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:45:59 crc kubenswrapper[4941]: I0227 19:45:59.851067 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:45:59 crc 
kubenswrapper[4941]: I0227 19:45:59.851428 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.141850 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537026-whvl4"] Feb 27 19:46:00 crc kubenswrapper[4941]: E0227 19:46:00.142061 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" containerName="oc" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.142075 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" containerName="oc" Feb 27 19:46:00 crc kubenswrapper[4941]: E0227 19:46:00.142089 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d53559-0fb5-4537-84b3-885f0fd0d217" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.142095 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d53559-0fb5-4537-84b3-885f0fd0d217" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.142197 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d53559-0fb5-4537-84b3-885f0fd0d217" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.142207 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d98f658-1f8e-41f5-bc4e-2f442243e453" containerName="oc" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.142580 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.144391 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.144933 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.149513 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.158715 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-whvl4"] Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.301404 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgv6\" (UniqueName: \"kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6\") pod \"auto-csr-approver-29537026-whvl4\" (UID: \"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4\") " pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.403562 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgv6\" (UniqueName: \"kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6\") pod \"auto-csr-approver-29537026-whvl4\" (UID: \"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4\") " pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.428564 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgv6\" (UniqueName: \"kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6\") pod \"auto-csr-approver-29537026-whvl4\" (UID: \"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4\") " 
pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.470752 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.879182 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-whvl4"] Feb 27 19:46:00 crc kubenswrapper[4941]: I0227 19:46:00.967917 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537026-whvl4" event={"ID":"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4","Type":"ContainerStarted","Data":"fda5695d7d1d7549d2013fbe9dc032fdcb9a8c5560f292079dc0c7de1b13db20"} Feb 27 19:46:01 crc kubenswrapper[4941]: E0227 19:46:01.985109 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:01 crc kubenswrapper[4941]: E0227 19:46:01.985240 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:46:01 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:01 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-whvl4_openshift-infra(30ae0dbf-9bfb-4038-98a5-1fc39572c5b4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:01 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:46:01 crc kubenswrapper[4941]: E0227 19:46:01.986438 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:46:02 crc kubenswrapper[4941]: E0227 19:46:02.983048 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:46:05 crc kubenswrapper[4941]: E0227 19:46:05.468031 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:46:11 crc kubenswrapper[4941]: E0227 19:46:11.469888 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:46:18 crc kubenswrapper[4941]: E0227 19:46:18.444192 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:18 crc kubenswrapper[4941]: E0227 19:46:18.444870 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:46:18 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:18 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-whvl4_openshift-infra(30ae0dbf-9bfb-4038-98a5-1fc39572c5b4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:18 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:46:18 crc kubenswrapper[4941]: E0227 19:46:18.446368 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:46:20 crc kubenswrapper[4941]: E0227 19:46:20.469010 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:46:26 crc kubenswrapper[4941]: E0227 19:46:26.469020 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:46:29 crc kubenswrapper[4941]: I0227 19:46:29.851368 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:46:29 crc kubenswrapper[4941]: I0227 19:46:29.852589 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:46:29 crc kubenswrapper[4941]: I0227 19:46:29.852666 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:46:29 crc kubenswrapper[4941]: I0227 19:46:29.853262 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:46:29 crc kubenswrapper[4941]: I0227 
19:46:29.853317 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7" gracePeriod=600 Feb 27 19:46:29 crc kubenswrapper[4941]: E0227 19:46:29.897347 4941 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0b99f5_8424_4e74_a332_f6dff828c48a.slice/crio-9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:46:30 crc kubenswrapper[4941]: I0227 19:46:30.127532 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7" exitCode=0 Feb 27 19:46:30 crc kubenswrapper[4941]: I0227 19:46:30.127617 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7"} Feb 27 19:46:30 crc kubenswrapper[4941]: I0227 19:46:30.127859 4941 scope.go:117] "RemoveContainer" containerID="9c85a96bc4775f82cb1875c85131dd1a64f63208a52adc34dd383c749b265921" Feb 27 19:46:31 crc kubenswrapper[4941]: I0227 19:46:31.135701 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589"} Feb 27 19:46:33 crc kubenswrapper[4941]: E0227 19:46:33.468684 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:46:34 crc kubenswrapper[4941]: E0227 19:46:34.469299 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:46:38 crc kubenswrapper[4941]: E0227 19:46:38.470149 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.282181 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.282905 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.284167 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.400857 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.401038 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:46:46 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:46 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29537026-whvl4_openshift-infra(30ae0dbf-9bfb-4038-98a5-1fc39572c5b4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:46 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:46:46 crc kubenswrapper[4941]: E0227 19:46:46.402871 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:46:49 crc kubenswrapper[4941]: E0227 19:46:49.469350 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:46:57 crc kubenswrapper[4941]: E0227 19:46:57.471043 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:46:59 crc kubenswrapper[4941]: E0227 19:46:59.468853 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:47:02 crc kubenswrapper[4941]: E0227 19:47:02.847021 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:47:02 crc kubenswrapper[4941]: E0227 19:47:02.847436 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:47:02 crc kubenswrapper[4941]: E0227 19:47:02.848773 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:47:11 crc kubenswrapper[4941]: E0227 19:47:11.469607 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:47:13 crc kubenswrapper[4941]: E0227 19:47:13.468103 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:47:14 crc kubenswrapper[4941]: E0227 19:47:14.468679 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:47:24 crc kubenswrapper[4941]: E0227 19:47:24.469119 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:47:25 crc kubenswrapper[4941]: E0227 19:47:25.468538 4941 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:47:28 crc kubenswrapper[4941]: E0227 19:47:28.511688 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:47:28 crc kubenswrapper[4941]: E0227 19:47:28.511842 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:47:28 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:47:28 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29537026-whvl4_openshift-infra(30ae0dbf-9bfb-4038-98a5-1fc39572c5b4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:47:28 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:47:28 crc kubenswrapper[4941]: E0227 19:47:28.513032 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:47:38 crc kubenswrapper[4941]: E0227 19:47:38.469328 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:47:39 crc kubenswrapper[4941]: E0227 19:47:39.468815 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:47:41 crc kubenswrapper[4941]: E0227 19:47:41.469549 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:47:49 crc kubenswrapper[4941]: E0227 19:47:49.471459 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:47:50 crc kubenswrapper[4941]: E0227 19:47:50.468357 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:47:52 crc kubenswrapper[4941]: E0227 19:47:52.479675 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.416770 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v74b7"] Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418072 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-controller" containerID="cri-o://46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418115 4941 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="nbdb" containerID="cri-o://1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418145 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418226 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="northd" containerID="cri-o://bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418245 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-acl-logging" containerID="cri-o://c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418323 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-node" containerID="cri-o://69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.418285 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="sbdb" 
containerID="cri-o://a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.472148 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" containerID="cri-o://32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" gracePeriod=30 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.647854 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/2.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.648329 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/1.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.648375 4941 generic.go:334] "Generic (PLEG): container finished" podID="16d71936-7f0d-4add-a17b-400840d5fce2" containerID="47b17cd33fafc9994c48d1c40ce7cd487378811061d35a5d74b8b3e7104328dc" exitCode=2 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.648439 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerDied","Data":"47b17cd33fafc9994c48d1c40ce7cd487378811061d35a5d74b8b3e7104328dc"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.648489 4941 scope.go:117] "RemoveContainer" containerID="98c4f4803285881fb68c23cba98abe65750362c9c35d77450c0527fe4b849cbf" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.649026 4941 scope.go:117] "RemoveContainer" containerID="47b17cd33fafc9994c48d1c40ce7cd487378811061d35a5d74b8b3e7104328dc" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.651375 4941 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovnkube-controller/3.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.662218 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-acl-logging/0.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.663675 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-controller/0.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668004 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" exitCode=0 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668037 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" exitCode=0 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668046 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" exitCode=0 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668054 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" exitCode=0 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668061 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615" exitCode=143 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668068 4941 generic.go:334] "Generic (PLEG): container finished" 
podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea" exitCode=143 Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668104 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668168 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668185 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668201 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668216 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.668228 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" 
event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea"} Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.770273 4941 scope.go:117] "RemoveContainer" containerID="8bfb2e0dd55e39d583e0c388a1aff7f6cb6b98407778ea1fc6350d5dd66df261" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.794323 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-acl-logging/0.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.795542 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-controller/0.log" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.795972 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846201 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9242g"] Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846645 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-acl-logging" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846659 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-acl-logging" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846668 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-node" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846673 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-node" Feb 27 19:47:57 crc kubenswrapper[4941]: 
E0227 19:47:57.846685 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846690 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846696 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846702 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846709 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846716 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846723 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="nbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846729 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="nbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846739 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kubecfg-setup" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846745 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kubecfg-setup" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846753 4941 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846758 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846766 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="northd" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846771 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="northd" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846779 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846785 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846792 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="sbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846797 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="sbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.846805 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846811 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846893 4941 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="nbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846905 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846913 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846921 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846928 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846936 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846943 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovn-acl-logging" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846951 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846958 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="sbdb" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.846964 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="kube-rbac-proxy-node" Feb 27 19:47:57 crc 
kubenswrapper[4941]: I0227 19:47:57.846974 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="northd" Feb 27 19:47:57 crc kubenswrapper[4941]: E0227 19:47:57.847104 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.847112 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.847227 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerName="ovnkube-controller" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.848832 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934526 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934594 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934622 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: 
\"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934647 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934680 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934666 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934739 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934740 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934765 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934779 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934824 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934875 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934905 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash" (OuterVolumeSpecName: "host-slash") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934940 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934985 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935031 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket" (OuterVolumeSpecName: "log-socket") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.934904 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935824 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935858 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935892 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935889 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935916 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.935942 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log" (OuterVolumeSpecName: "node-log") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936028 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936096 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936148 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936170 
4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936215 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936225 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936279 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936274 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936351 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936287 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936237 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936391 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936322 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936643 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxgc\" (UniqueName: \"kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc\") pod \"bb476894-9c4f-487a-bfa6-5babb5243c0d\" (UID: \"bb476894-9c4f-487a-bfa6-5babb5243c0d\") " Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936889 4941 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936907 4941 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936919 4941 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936931 4941 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc 
kubenswrapper[4941]: I0227 19:47:57.936943 4941 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936952 4941 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936963 4941 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936972 4941 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936981 4941 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.936990 4941 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937000 4941 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937010 4941 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937020 4941 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937030 4941 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937039 4941 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937048 4941 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.937058 4941 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.941687 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.942170 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc" (OuterVolumeSpecName: "kube-api-access-5rxgc") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "kube-api-access-5rxgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:47:57 crc kubenswrapper[4941]: I0227 19:47:57.957923 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bb476894-9c4f-487a-bfa6-5babb5243c0d" (UID: "bb476894-9c4f-487a-bfa6-5babb5243c0d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038412 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-netns\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038506 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-kubelet\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038542 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-systemd-units\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038582 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-node-log\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038616 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-env-overrides\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038653 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-bin\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038685 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-slash\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038723 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-log-socket\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038936 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038969 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-var-lib-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.038996 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-etc-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039032 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-ovn\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039069 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sl2mj\" (UniqueName: \"kubernetes.io/projected/03de2f55-a002-4261-b728-8a101525bce1-kube-api-access-sl2mj\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039102 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-script-lib\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039130 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-systemd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039161 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039276 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039331 4941 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-config\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039364 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03de2f55-a002-4261-b728-8a101525bce1-ovn-node-metrics-cert\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039459 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-netd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039620 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rxgc\" (UniqueName: \"kubernetes.io/projected/bb476894-9c4f-487a-bfa6-5babb5243c0d-kube-api-access-5rxgc\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039641 4941 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb476894-9c4f-487a-bfa6-5babb5243c0d-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.039661 4941 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb476894-9c4f-487a-bfa6-5babb5243c0d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 19:47:58 crc 
kubenswrapper[4941]: I0227 19:47:58.140790 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140834 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-etc-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140855 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-var-lib-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140876 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-ovn\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140898 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl2mj\" (UniqueName: \"kubernetes.io/projected/03de2f55-a002-4261-b728-8a101525bce1-kube-api-access-sl2mj\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140915 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-script-lib\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140931 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-systemd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140947 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140968 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.140986 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-config\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141002 4941 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03de2f55-a002-4261-b728-8a101525bce1-ovn-node-metrics-cert\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141018 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-netd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141037 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-netns\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141039 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-var-lib-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141076 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-kubelet\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141081 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-systemd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141052 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-kubelet\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141111 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-etc-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141162 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141185 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-systemd-units\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141147 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-netd\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141133 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-openvswitch\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141142 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-netns\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141186 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-run-ovn\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141382 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-systemd-units\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141437 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-node-log\") pod \"ovnkube-node-9242g\" (UID: 
\"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141538 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-env-overrides\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141558 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-node-log\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141658 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-bin\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141699 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-slash\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141741 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-log-socket\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc 
kubenswrapper[4941]: I0227 19:47:58.141833 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-log-socket\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141880 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-cni-bin\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141921 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-script-lib\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.141922 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-slash\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.142167 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de2f55-a002-4261-b728-8a101525bce1-host-run-ovn-kubernetes\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.142279 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-ovnkube-config\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.142510 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03de2f55-a002-4261-b728-8a101525bce1-env-overrides\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.154034 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03de2f55-a002-4261-b728-8a101525bce1-ovn-node-metrics-cert\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.164345 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl2mj\" (UniqueName: \"kubernetes.io/projected/03de2f55-a002-4261-b728-8a101525bce1-kube-api-access-sl2mj\") pod \"ovnkube-node-9242g\" (UID: \"03de2f55-a002-4261-b728-8a101525bce1\") " pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.462677 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" Feb 27 19:47:58 crc kubenswrapper[4941]: W0227 19:47:58.493518 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03de2f55_a002_4261_b728_8a101525bce1.slice/crio-d5af73b3aa96292117dbdbedd1a0975ee6f7867d2e039c30826582e2da6caa3d WatchSource:0}: Error finding container d5af73b3aa96292117dbdbedd1a0975ee6f7867d2e039c30826582e2da6caa3d: Status 404 returned error can't find the container with id d5af73b3aa96292117dbdbedd1a0975ee6f7867d2e039c30826582e2da6caa3d Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.678301 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-acl-logging/0.log" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.679421 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v74b7_bb476894-9c4f-487a-bfa6-5babb5243c0d/ovn-controller/0.log" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.679921 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" exitCode=0 Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.679956 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb476894-9c4f-487a-bfa6-5babb5243c0d" containerID="bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" exitCode=0 Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.680021 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.680115 4941 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.680149 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" event={"ID":"bb476894-9c4f-487a-bfa6-5babb5243c0d","Type":"ContainerDied","Data":"38de59a0fe679fe6441797a6ff231bc7c5d1b85589c3c666cf0c687f5d2e9dba"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.680180 4941 scope.go:117] "RemoveContainer" containerID="32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.680374 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v74b7" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.681530 4941 generic.go:334] "Generic (PLEG): container finished" podID="03de2f55-a002-4261-b728-8a101525bce1" containerID="2f32986b6b06ed39c0a8252eae44e8bc9e579c828b33d0bcce3b435200c3f10c" exitCode=0 Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.681584 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerDied","Data":"2f32986b6b06ed39c0a8252eae44e8bc9e579c828b33d0bcce3b435200c3f10c"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.681610 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"d5af73b3aa96292117dbdbedd1a0975ee6f7867d2e039c30826582e2da6caa3d"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.686246 4941 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-lt4bk_16d71936-7f0d-4add-a17b-400840d5fce2/kube-multus/2.log" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.686332 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lt4bk" event={"ID":"16d71936-7f0d-4add-a17b-400840d5fce2","Type":"ContainerStarted","Data":"2fe508923f0d89d1eaabb09a0bf4cf3b911aafe93ba9992fdc052304ebca68c2"} Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.708159 4941 scope.go:117] "RemoveContainer" containerID="a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.740571 4941 scope.go:117] "RemoveContainer" containerID="1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.770082 4941 scope.go:117] "RemoveContainer" containerID="bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.794210 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v74b7"] Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.801696 4941 scope.go:117] "RemoveContainer" containerID="6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.802930 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v74b7"] Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.819793 4941 scope.go:117] "RemoveContainer" containerID="69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.832536 4941 scope.go:117] "RemoveContainer" containerID="c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.853141 4941 scope.go:117] "RemoveContainer" 
containerID="46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.877434 4941 scope.go:117] "RemoveContainer" containerID="745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.890178 4941 scope.go:117] "RemoveContainer" containerID="32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.890618 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a\": container with ID starting with 32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a not found: ID does not exist" containerID="32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.890680 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a"} err="failed to get container status \"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a\": rpc error: code = NotFound desc = could not find container \"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a\": container with ID starting with 32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.890718 4941 scope.go:117] "RemoveContainer" containerID="a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.891018 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\": container with ID starting with 
a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1 not found: ID does not exist" containerID="a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891063 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1"} err="failed to get container status \"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\": rpc error: code = NotFound desc = could not find container \"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\": container with ID starting with a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891086 4941 scope.go:117] "RemoveContainer" containerID="1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.891457 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\": container with ID starting with 1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953 not found: ID does not exist" containerID="1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891504 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953"} err="failed to get container status \"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\": rpc error: code = NotFound desc = could not find container \"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\": container with ID starting with 1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953 not found: ID does not 
exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891520 4941 scope.go:117] "RemoveContainer" containerID="bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.891774 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\": container with ID starting with bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501 not found: ID does not exist" containerID="bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891817 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501"} err="failed to get container status \"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\": rpc error: code = NotFound desc = could not find container \"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\": container with ID starting with bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.891830 4941 scope.go:117] "RemoveContainer" containerID="6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.892140 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\": container with ID starting with 6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180 not found: ID does not exist" containerID="6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.892186 4941 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180"} err="failed to get container status \"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\": rpc error: code = NotFound desc = could not find container \"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\": container with ID starting with 6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.892219 4941 scope.go:117] "RemoveContainer" containerID="69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.892617 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\": container with ID starting with 69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5 not found: ID does not exist" containerID="69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.892685 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5"} err="failed to get container status \"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\": rpc error: code = NotFound desc = could not find container \"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\": container with ID starting with 69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.892737 4941 scope.go:117] "RemoveContainer" containerID="c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.893224 4941 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\": container with ID starting with c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615 not found: ID does not exist" containerID="c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893249 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615"} err="failed to get container status \"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\": rpc error: code = NotFound desc = could not find container \"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\": container with ID starting with c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893264 4941 scope.go:117] "RemoveContainer" containerID="46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.893526 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\": container with ID starting with 46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea not found: ID does not exist" containerID="46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893562 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea"} err="failed to get container status \"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\": rpc error: code = NotFound desc = could 
not find container \"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\": container with ID starting with 46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893585 4941 scope.go:117] "RemoveContainer" containerID="745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3" Feb 27 19:47:58 crc kubenswrapper[4941]: E0227 19:47:58.893846 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\": container with ID starting with 745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3 not found: ID does not exist" containerID="745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893867 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3"} err="failed to get container status \"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\": rpc error: code = NotFound desc = could not find container \"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\": container with ID starting with 745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.893881 4941 scope.go:117] "RemoveContainer" containerID="32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894145 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a"} err="failed to get container status \"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a\": rpc error: code = NotFound 
desc = could not find container \"32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a\": container with ID starting with 32735a97b295a3ddba9101de7f330e3e0483e05d0227f68a6fa51ad1ff0f034a not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894168 4941 scope.go:117] "RemoveContainer" containerID="a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894441 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1"} err="failed to get container status \"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\": rpc error: code = NotFound desc = could not find container \"a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1\": container with ID starting with a4c379ee49adf21c8bb23fed93edb2ba316ad47412a0db30b07f787d7fb3c7c1 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894477 4941 scope.go:117] "RemoveContainer" containerID="1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894746 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953"} err="failed to get container status \"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\": rpc error: code = NotFound desc = could not find container \"1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953\": container with ID starting with 1bfaecd66c448c5b0383bbf0476365dc98cbb5754bc8bb93712690080644c953 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.894783 4941 scope.go:117] "RemoveContainer" containerID="bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 
19:47:58.895019 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501"} err="failed to get container status \"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\": rpc error: code = NotFound desc = could not find container \"bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501\": container with ID starting with bc32daefb3318eebe68cbd88a9a40e76719106d6a30aad5bd8734caf95b96501 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.895036 4941 scope.go:117] "RemoveContainer" containerID="6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.897576 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180"} err="failed to get container status \"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\": rpc error: code = NotFound desc = could not find container \"6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180\": container with ID starting with 6b7d80fd8e08521849e144fbf33b97f97f2986e136a683de2e9119c9193e0180 not found: ID does not exist" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.897607 4941 scope.go:117] "RemoveContainer" containerID="69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5" Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.897994 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5"} err="failed to get container status \"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\": rpc error: code = NotFound desc = could not find container \"69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5\": container with ID starting with 
69dd1f0f70eab1661b78dc649240cabac7531a8e4abef4285df69cb3c91443b5 not found: ID does not exist"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.898066 4941 scope.go:117] "RemoveContainer" containerID="c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.898534 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615"} err="failed to get container status \"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\": rpc error: code = NotFound desc = could not find container \"c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615\": container with ID starting with c179ccae9d90514b8c9a8d9621e2fddb17706e3a71d7703838fc6dcdf898e615 not found: ID does not exist"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.898569 4941 scope.go:117] "RemoveContainer" containerID="46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.898846 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea"} err="failed to get container status \"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\": rpc error: code = NotFound desc = could not find container \"46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea\": container with ID starting with 46a5c9eb08276a177630973211e9cf598e9594d71b150ad4ba16f4068934c0ea not found: ID does not exist"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.898885 4941 scope.go:117] "RemoveContainer" containerID="745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3"
Feb 27 19:47:58 crc kubenswrapper[4941]: I0227 19:47:58.899146 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3"} err="failed to get container status \"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\": rpc error: code = NotFound desc = could not find container \"745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3\": container with ID starting with 745fa28704057894e6e5e4f7e0ccd6430432f40bd602ae39a79ec43346732fc3 not found: ID does not exist"
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.697838 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"c6454540c498659422657961a32afdc851fa0d665fecd037ac09054128d60dfe"}
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.698302 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"3869645d4c0ba0db6c27968293822ddb472309dca9b12599a174681817ed08d4"}
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.698337 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"244b72e7482dffc3adb38bc27108e4b1866387ed0b7efd19069c4f9d0a377682"}
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.698358 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"85f6a992bf9cd8f4a68474cb6897ffb37a420a943d63042b7d7ec78c5b190027"}
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.698376 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"f4a4ee612fd8567c8a834984bcbdc28f9ac57ebb18cd95821c149969ea5d0089"}
Feb 27 19:47:59 crc kubenswrapper[4941]: I0227 19:47:59.698397 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"b2bcbf4d05b1e05623c56fe578ab158b20104aa58e62b431aec3c396ca4cb71f"}
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.140902 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537028-bfg7w"]
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.142038 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.271228 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm949\" (UniqueName: \"kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949\") pod \"auto-csr-approver-29537028-bfg7w\" (UID: \"4a332721-1e6a-4f6b-a4ff-a0943263f545\") " pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.372389 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm949\" (UniqueName: \"kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949\") pod \"auto-csr-approver-29537028-bfg7w\" (UID: \"4a332721-1e6a-4f6b-a4ff-a0943263f545\") " pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.390124 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm949\" (UniqueName: \"kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949\") pod \"auto-csr-approver-29537028-bfg7w\" (UID: \"4a332721-1e6a-4f6b-a4ff-a0943263f545\") " pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.473114 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb476894-9c4f-487a-bfa6-5babb5243c0d" path="/var/lib/kubelet/pods/bb476894-9c4f-487a-bfa6-5babb5243c0d/volumes"
Feb 27 19:48:00 crc kubenswrapper[4941]: I0227 19:48:00.473337 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: E0227 19:48:00.514683 4941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(02e97cfac3372a206044a11a8e70fcfadab5331ba19cf5fd4b96fc67b9fbd5ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 19:48:00 crc kubenswrapper[4941]: E0227 19:48:00.515075 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(02e97cfac3372a206044a11a8e70fcfadab5331ba19cf5fd4b96fc67b9fbd5ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: E0227 19:48:00.515102 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(02e97cfac3372a206044a11a8e70fcfadab5331ba19cf5fd4b96fc67b9fbd5ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:00 crc kubenswrapper[4941]: E0227 19:48:00.515151 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(02e97cfac3372a206044a11a8e70fcfadab5331ba19cf5fd4b96fc67b9fbd5ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:48:01 crc kubenswrapper[4941]: I0227 19:48:01.712756 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"99ea0e9e446a7591af78ee345804b5a41821e01f9432354a8306e464e176077e"}
Feb 27 19:48:04 crc kubenswrapper[4941]: E0227 19:48:04.470208 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:48:04 crc kubenswrapper[4941]: E0227 19:48:04.471259 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.730267 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" event={"ID":"03de2f55-a002-4261-b728-8a101525bce1","Type":"ContainerStarted","Data":"d6527e8261ae9a5e4a50d384fa242702138ea52a65e88baaf787cebb685a4fd6"}
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.730667 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.730723 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.772317 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9242g" podStartSLOduration=7.772300796 podStartE2EDuration="7.772300796s" podCreationTimestamp="2026-02-27 19:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:48:04.769210208 +0000 UTC m=+803.030350638" watchObservedRunningTime="2026-02-27 19:48:04.772300796 +0000 UTC m=+803.033441216"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.777812 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.970054 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2ql7n"]
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.970848 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.973615 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.973971 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.974000 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Feb 27 19:48:04 crc kubenswrapper[4941]: I0227 19:48:04.973450 4941 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-zfpgz"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.130983 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.131058 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsmlx\" (UniqueName: \"kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.131116 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.180152 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2ql7n"]
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.189392 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-bfg7w"]
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.189553 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.190012 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.217226 4941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(6d901fd9e4adeee43d1aac00d1cc67285a959c9136f37ecb61fa5deaeab67f37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.217300 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(6d901fd9e4adeee43d1aac00d1cc67285a959c9136f37ecb61fa5deaeab67f37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.217325 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(6d901fd9e4adeee43d1aac00d1cc67285a959c9136f37ecb61fa5deaeab67f37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.217397 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29537028-bfg7w_openshift-infra_4a332721-1e6a-4f6b-a4ff-a0943263f545_0(6d901fd9e4adeee43d1aac00d1cc67285a959c9136f37ecb61fa5deaeab67f37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.231741 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.231790 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.231871 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsmlx\" (UniqueName: \"kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.232224 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.232496 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.253772 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsmlx\" (UniqueName: \"kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx\") pod \"crc-storage-crc-2ql7n\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") " pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.286928 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.312927 4941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(eaab6d36783262cac012aacfb477dd58b51bc2615fc4423c316255d656bb54d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.312988 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(eaab6d36783262cac012aacfb477dd58b51bc2615fc4423c316255d656bb54d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.313012 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(eaab6d36783262cac012aacfb477dd58b51bc2615fc4423c316255d656bb54d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.313058 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2ql7n_crc-storage(a01e4266-4013-425b-b0d0-fccfaae00f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2ql7n_crc-storage(a01e4266-4013-425b-b0d0-fccfaae00f13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(eaab6d36783262cac012aacfb477dd58b51bc2615fc4423c316255d656bb54d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2ql7n" podUID="a01e4266-4013-425b-b0d0-fccfaae00f13"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.735259 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.735907 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.736213 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.754204 4941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(a975c02bd99168d9ec33a782e9beeb112a22c2ef628020a4f55fb3b6845e0d8c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.754271 4941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(a975c02bd99168d9ec33a782e9beeb112a22c2ef628020a4f55fb3b6845e0d8c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.754300 4941 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(a975c02bd99168d9ec33a782e9beeb112a22c2ef628020a4f55fb3b6845e0d8c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:05 crc kubenswrapper[4941]: E0227 19:48:05.754356 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2ql7n_crc-storage(a01e4266-4013-425b-b0d0-fccfaae00f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2ql7n_crc-storage(a01e4266-4013-425b-b0d0-fccfaae00f13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2ql7n_crc-storage_a01e4266-4013-425b-b0d0-fccfaae00f13_0(a975c02bd99168d9ec33a782e9beeb112a22c2ef628020a4f55fb3b6845e0d8c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2ql7n" podUID="a01e4266-4013-425b-b0d0-fccfaae00f13"
Feb 27 19:48:05 crc kubenswrapper[4941]: I0227 19:48:05.765858 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:06 crc kubenswrapper[4941]: E0227 19:48:06.470521 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4"
Feb 27 19:48:11 crc kubenswrapper[4941]: I0227 19:48:11.864088 4941 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 19:48:15 crc kubenswrapper[4941]: E0227 19:48:15.470977 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:48:15 crc kubenswrapper[4941]: E0227 19:48:15.471711 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:48:17 crc kubenswrapper[4941]: I0227 19:48:17.466899 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:17 crc kubenswrapper[4941]: I0227 19:48:17.467882 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w"
Feb 27 19:48:17 crc kubenswrapper[4941]: I0227 19:48:17.876570 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-bfg7w"]
Feb 27 19:48:17 crc kubenswrapper[4941]: W0227 19:48:17.891623 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a332721_1e6a_4f6b_a4ff_a0943263f545.slice/crio-d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4 WatchSource:0}: Error finding container d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4: Status 404 returned error can't find the container with id d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4
Feb 27 19:48:18 crc kubenswrapper[4941]: I0227 19:48:18.825224 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" event={"ID":"4a332721-1e6a-4f6b-a4ff-a0943263f545","Type":"ContainerStarted","Data":"d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4"}
Feb 27 19:48:18 crc kubenswrapper[4941]: E0227 19:48:18.833802 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 19:48:18 crc kubenswrapper[4941]: E0227 19:48:18.833993 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 19:48:18 crc kubenswrapper[4941]: 	container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 19:48:18 crc kubenswrapper[4941]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 19:48:18 crc kubenswrapper[4941]: 	> logger="UnhandledError"
Feb 27 19:48:18 crc kubenswrapper[4941]: E0227 19:48:18.835200 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:48:19 crc kubenswrapper[4941]: E0227 19:48:19.832286 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:48:20 crc kubenswrapper[4941]: I0227 19:48:20.466552 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:20 crc kubenswrapper[4941]: I0227 19:48:20.467294 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:20 crc kubenswrapper[4941]: E0227 19:48:20.469319 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4"
Feb 27 19:48:20 crc kubenswrapper[4941]: I0227 19:48:20.911410 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2ql7n"]
Feb 27 19:48:20 crc kubenswrapper[4941]: W0227 19:48:20.924850 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01e4266_4013_425b_b0d0_fccfaae00f13.slice/crio-fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66 WatchSource:0}: Error finding container fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66: Status 404 returned error can't find the container with id fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66
Feb 27 19:48:21 crc kubenswrapper[4941]: I0227 19:48:21.842229 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ql7n" event={"ID":"a01e4266-4013-425b-b0d0-fccfaae00f13","Type":"ContainerStarted","Data":"fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66"}
Feb 27 19:48:22 crc kubenswrapper[4941]: I0227 19:48:22.848329 4941 generic.go:334] "Generic (PLEG): container finished" podID="a01e4266-4013-425b-b0d0-fccfaae00f13" containerID="eea660daf763a8758f468b7fef2836a91a8d96cf04fce648e0324ff836d47285" exitCode=0
Feb 27 19:48:22 crc kubenswrapper[4941]: I0227 19:48:22.848409 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ql7n" event={"ID":"a01e4266-4013-425b-b0d0-fccfaae00f13","Type":"ContainerDied","Data":"eea660daf763a8758f468b7fef2836a91a8d96cf04fce648e0324ff836d47285"}
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.069114 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.190000 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt\") pod \"a01e4266-4013-425b-b0d0-fccfaae00f13\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") "
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.190132 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage\") pod \"a01e4266-4013-425b-b0d0-fccfaae00f13\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") "
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.190248 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a01e4266-4013-425b-b0d0-fccfaae00f13" (UID: "a01e4266-4013-425b-b0d0-fccfaae00f13"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.190339 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsmlx\" (UniqueName: \"kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx\") pod \"a01e4266-4013-425b-b0d0-fccfaae00f13\" (UID: \"a01e4266-4013-425b-b0d0-fccfaae00f13\") "
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.190832 4941 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a01e4266-4013-425b-b0d0-fccfaae00f13-node-mnt\") on node \"crc\" DevicePath \"\""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.197180 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx" (OuterVolumeSpecName: "kube-api-access-nsmlx") pod "a01e4266-4013-425b-b0d0-fccfaae00f13" (UID: "a01e4266-4013-425b-b0d0-fccfaae00f13"). InnerVolumeSpecName "kube-api-access-nsmlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.208604 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a01e4266-4013-425b-b0d0-fccfaae00f13" (UID: "a01e4266-4013-425b-b0d0-fccfaae00f13"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.292064 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsmlx\" (UniqueName: \"kubernetes.io/projected/a01e4266-4013-425b-b0d0-fccfaae00f13-kube-api-access-nsmlx\") on node \"crc\" DevicePath \"\""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.292383 4941 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a01e4266-4013-425b-b0d0-fccfaae00f13-crc-storage\") on node \"crc\" DevicePath \"\""
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.868348 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ql7n" event={"ID":"a01e4266-4013-425b-b0d0-fccfaae00f13","Type":"ContainerDied","Data":"fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66"}
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.868406 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcdfcc143212c9ed40223dbec06c2472dc153ff1b52e4e21b50f8cc40facac66"
Feb 27 19:48:24 crc kubenswrapper[4941]: I0227 19:48:24.868455 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ql7n"
Feb 27 19:48:27 crc kubenswrapper[4941]: E0227 19:48:27.469690 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:48:28 crc kubenswrapper[4941]: I0227 19:48:28.497067 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9242g"
Feb 27 19:48:29 crc kubenswrapper[4941]: E0227 19:48:29.471225 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:48:31 crc kubenswrapper[4941]: E0227 19:48:31.469225 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4"
Feb 27 19:48:33 crc kubenswrapper[4941]: I0227 19:48:33.469272 4941 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 19:48:34 crc kubenswrapper[4941]: E0227 19:48:34.378506 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 19:48:34 crc kubenswrapper[4941]: E0227 19:48:34.378689 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 19:48:34 crc kubenswrapper[4941]: 	container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 19:48:34 crc kubenswrapper[4941]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 19:48:34 crc kubenswrapper[4941]: 	> logger="UnhandledError"
Feb 27 19:48:34 crc kubenswrapper[4941]: E0227 19:48:34.380307 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:48:41 crc kubenswrapper[4941]: E0227 19:48:41.469772 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:48:42 crc kubenswrapper[4941]: E0227 19:48:42.473313 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:48:45 crc kubenswrapper[4941]: E0227 19:48:45.468759 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-whvl4" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" Feb 27 19:48:49 crc kubenswrapper[4941]: E0227 19:48:49.469257 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:48:54 crc kubenswrapper[4941]: E0227 19:48:54.468874 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:48:54 crc kubenswrapper[4941]: E0227 19:48:54.469099 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:48:59 crc kubenswrapper[4941]: I0227 19:48:59.851528 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:48:59 crc kubenswrapper[4941]: I0227 19:48:59.852380 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:49:01 crc kubenswrapper[4941]: E0227 19:49:01.249514 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:49:01 crc kubenswrapper[4941]: E0227 19:49:01.249913 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:49:01 crc kubenswrapper[4941]: 
container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:49:01 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:49:01 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:49:01 crc kubenswrapper[4941]: E0227 19:49:01.251066 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" 
podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:49:02 crc kubenswrapper[4941]: I0227 19:49:02.085508 4941 generic.go:334] "Generic (PLEG): container finished" podID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" containerID="fa25ce9486830c2a08037587a6a7bf82ff0c027c1328886ab4ddb89b97577d6e" exitCode=0 Feb 27 19:49:02 crc kubenswrapper[4941]: I0227 19:49:02.085595 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537026-whvl4" event={"ID":"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4","Type":"ContainerDied","Data":"fa25ce9486830c2a08037587a6a7bf82ff0c027c1328886ab4ddb89b97577d6e"} Feb 27 19:49:03 crc kubenswrapper[4941]: I0227 19:49:03.332648 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:49:03 crc kubenswrapper[4941]: I0227 19:49:03.461317 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvgv6\" (UniqueName: \"kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6\") pod \"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4\" (UID: \"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4\") " Feb 27 19:49:03 crc kubenswrapper[4941]: I0227 19:49:03.466384 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6" (OuterVolumeSpecName: "kube-api-access-pvgv6") pod "30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" (UID: "30ae0dbf-9bfb-4038-98a5-1fc39572c5b4"). InnerVolumeSpecName "kube-api-access-pvgv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:49:03 crc kubenswrapper[4941]: I0227 19:49:03.563242 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvgv6\" (UniqueName: \"kubernetes.io/projected/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4-kube-api-access-pvgv6\") on node \"crc\" DevicePath \"\"" Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.097914 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537026-whvl4" event={"ID":"30ae0dbf-9bfb-4038-98a5-1fc39572c5b4","Type":"ContainerDied","Data":"fda5695d7d1d7549d2013fbe9dc032fdcb9a8c5560f292079dc0c7de1b13db20"} Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.097949 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda5695d7d1d7549d2013fbe9dc032fdcb9a8c5560f292079dc0c7de1b13db20" Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.097997 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-whvl4" Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.394164 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dtt9s"] Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.397187 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dtt9s"] Feb 27 19:49:04 crc kubenswrapper[4941]: I0227 19:49:04.476727 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92b6a49-76c7-44fd-8610-9918071ec1ae" path="/var/lib/kubelet/pods/a92b6a49-76c7-44fd-8610-9918071ec1ae/volumes" Feb 27 19:49:09 crc kubenswrapper[4941]: E0227 19:49:09.470925 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:49:09 crc kubenswrapper[4941]: E0227 19:49:09.471673 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:49:15 crc kubenswrapper[4941]: E0227 19:49:15.469272 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:49:20 crc kubenswrapper[4941]: E0227 19:49:20.468988 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:49:22 crc kubenswrapper[4941]: E0227 19:49:22.477967 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:49:28 crc kubenswrapper[4941]: E0227 19:49:28.471161 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" 
podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:49:29 crc kubenswrapper[4941]: I0227 19:49:29.851751 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:49:29 crc kubenswrapper[4941]: I0227 19:49:29.851844 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:49:33 crc kubenswrapper[4941]: E0227 19:49:33.471425 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:49:35 crc kubenswrapper[4941]: E0227 19:49:35.468652 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:49:39 crc kubenswrapper[4941]: E0227 19:49:39.468878 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:49:43 crc kubenswrapper[4941]: I0227 
19:49:43.475770 4941 scope.go:117] "RemoveContainer" containerID="795ef0d3eb8bcaac5780f8d47cc9f00b6eaa24afe889cc1ec62a621daa604867" Feb 27 19:49:47 crc kubenswrapper[4941]: E0227 19:49:47.469612 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:49:47 crc kubenswrapper[4941]: E0227 19:49:47.469752 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:49:54 crc kubenswrapper[4941]: E0227 19:49:54.226785 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:49:54 crc kubenswrapper[4941]: E0227 19:49:54.227293 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:49:54 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:49:54 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-bfg7w_openshift-infra(4a332721-1e6a-4f6b-a4ff-a0943263f545): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:49:54 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:49:54 crc kubenswrapper[4941]: E0227 19:49:54.228567 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:49:59 crc kubenswrapper[4941]: I0227 19:49:59.851069 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:49:59 crc kubenswrapper[4941]: I0227 19:49:59.851713 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:49:59 crc kubenswrapper[4941]: I0227 19:49:59.851768 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:49:59 crc kubenswrapper[4941]: I0227 19:49:59.852506 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:49:59 crc kubenswrapper[4941]: I0227 19:49:59.852565 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589" gracePeriod=600 Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.145219 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537030-89szz"] Feb 27 19:50:00 crc kubenswrapper[4941]: E0227 19:50:00.145952 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01e4266-4013-425b-b0d0-fccfaae00f13" containerName="storage" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.145975 4941 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a01e4266-4013-425b-b0d0-fccfaae00f13" containerName="storage" Feb 27 19:50:00 crc kubenswrapper[4941]: E0227 19:50:00.146008 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" containerName="oc" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.146019 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" containerName="oc" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.146161 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" containerName="oc" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.146183 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01e4266-4013-425b-b0d0-fccfaae00f13" containerName="storage" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.146773 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537030-89szz" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.157755 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537030-89szz"] Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.227389 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhrc\" (UniqueName: \"kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc\") pod \"auto-csr-approver-29537030-89szz\" (UID: \"984d38fd-74e0-4716-a788-dadecfa16dc5\") " pod="openshift-infra/auto-csr-approver-29537030-89szz" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.329158 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhrc\" (UniqueName: \"kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc\") pod \"auto-csr-approver-29537030-89szz\" (UID: 
\"984d38fd-74e0-4716-a788-dadecfa16dc5\") " pod="openshift-infra/auto-csr-approver-29537030-89szz" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.358343 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhrc\" (UniqueName: \"kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc\") pod \"auto-csr-approver-29537030-89szz\" (UID: \"984d38fd-74e0-4716-a788-dadecfa16dc5\") " pod="openshift-infra/auto-csr-approver-29537030-89szz" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.443733 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589" exitCode=0 Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.443779 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589"} Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.443811 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7"} Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.443830 4941 scope.go:117] "RemoveContainer" containerID="9ea4133087231f38e402e96728a49cb466bd287c5749691bbf385d19714cdee7" Feb 27 19:50:00 crc kubenswrapper[4941]: E0227 19:50:00.474146 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.498697 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537030-89szz" Feb 27 19:50:00 crc kubenswrapper[4941]: I0227 19:50:00.666661 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537030-89szz"] Feb 27 19:50:00 crc kubenswrapper[4941]: W0227 19:50:00.671819 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod984d38fd_74e0_4716_a788_dadecfa16dc5.slice/crio-756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c WatchSource:0}: Error finding container 756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c: Status 404 returned error can't find the container with id 756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c Feb 27 19:50:01 crc kubenswrapper[4941]: I0227 19:50:01.452192 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537030-89szz" event={"ID":"984d38fd-74e0-4716-a788-dadecfa16dc5","Type":"ContainerStarted","Data":"756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c"} Feb 27 19:50:01 crc kubenswrapper[4941]: E0227 19:50:01.469250 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:50:02 crc kubenswrapper[4941]: E0227 19:50:02.273272 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:50:02 crc kubenswrapper[4941]: E0227 19:50:02.274032 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:50:02 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:50:02 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dhrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537030-89szz_openshift-infra(984d38fd-74e0-4716-a788-dadecfa16dc5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:50:02 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:50:02 crc kubenswrapper[4941]: E0227 19:50:02.275539 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537030-89szz" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" Feb 27 19:50:02 crc kubenswrapper[4941]: E0227 19:50:02.462368 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537030-89szz" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.758022 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.759502 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.771535 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.812251 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.812581 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.812609 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrw6\" (UniqueName: \"kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.913747 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.913827 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.913863 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrw6\" (UniqueName: \"kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.914496 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.914596 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:06 crc kubenswrapper[4941]: I0227 19:50:06.938593 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrw6\" (UniqueName: \"kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6\") pod \"certified-operators-9dcbj\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:07 crc kubenswrapper[4941]: I0227 19:50:07.083989 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:07 crc kubenswrapper[4941]: I0227 19:50:07.307823 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:07 crc kubenswrapper[4941]: E0227 19:50:07.471223 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:50:07 crc kubenswrapper[4941]: I0227 19:50:07.493790 4941 generic.go:334] "Generic (PLEG): container finished" podID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerID="c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d" exitCode=0 Feb 27 19:50:07 crc kubenswrapper[4941]: I0227 19:50:07.493879 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerDied","Data":"c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d"} Feb 27 19:50:07 crc kubenswrapper[4941]: I0227 19:50:07.493929 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerStarted","Data":"2c1a438845191667096060894c6929b8e14ec42bdae7531acca80bb8347a1082"} Feb 27 19:50:08 crc kubenswrapper[4941]: I0227 19:50:08.512208 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerStarted","Data":"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486"} Feb 27 19:50:09 crc kubenswrapper[4941]: I0227 19:50:09.521636 4941 generic.go:334] "Generic (PLEG): container finished" 
podID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerID="96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486" exitCode=0 Feb 27 19:50:09 crc kubenswrapper[4941]: I0227 19:50:09.521722 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerDied","Data":"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486"} Feb 27 19:50:10 crc kubenswrapper[4941]: I0227 19:50:10.530237 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerStarted","Data":"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9"} Feb 27 19:50:10 crc kubenswrapper[4941]: I0227 19:50:10.551909 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dcbj" podStartSLOduration=2.12021027 podStartE2EDuration="4.551880025s" podCreationTimestamp="2026-02-27 19:50:06 +0000 UTC" firstStartedPulling="2026-02-27 19:50:07.496195439 +0000 UTC m=+925.757335869" lastFinishedPulling="2026-02-27 19:50:09.927865184 +0000 UTC m=+928.189005624" observedRunningTime="2026-02-27 19:50:10.55028507 +0000 UTC m=+928.811425530" watchObservedRunningTime="2026-02-27 19:50:10.551880025 +0000 UTC m=+928.813020475" Feb 27 19:50:12 crc kubenswrapper[4941]: I0227 19:50:12.953217 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"] Feb 27 19:50:12 crc kubenswrapper[4941]: I0227 19:50:12.954675 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:12 crc kubenswrapper[4941]: I0227 19:50:12.965119 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"] Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.001733 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6244\" (UniqueName: \"kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.001801 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.001850 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.103133 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6244\" (UniqueName: \"kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.103198 4941 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.103249 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.104039 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.104123 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.129107 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6244\" (UniqueName: \"kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244\") pod \"redhat-marketplace-w7tsp\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") " pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.282178 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.528103 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"] Feb 27 19:50:13 crc kubenswrapper[4941]: I0227 19:50:13.547605 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerStarted","Data":"5efdcf3ad36c3e736bb5774775c1a474025335cbd1c99f99d00a46c7e8d2b078"} Feb 27 19:50:14 crc kubenswrapper[4941]: E0227 19:50:14.409833 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:50:14 crc kubenswrapper[4941]: E0227 19:50:14.410030 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:50:14 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:50:14 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dhrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537030-89szz_openshift-infra(984d38fd-74e0-4716-a788-dadecfa16dc5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:50:14 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:50:14 crc kubenswrapper[4941]: E0227 19:50:14.411305 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537030-89szz" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" Feb 27 19:50:14 crc kubenswrapper[4941]: E0227 19:50:14.470153 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:50:14 crc kubenswrapper[4941]: E0227 19:50:14.470279 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:50:14 crc kubenswrapper[4941]: I0227 19:50:14.556207 4941 generic.go:334] "Generic (PLEG): container finished" podID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerID="91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32" exitCode=0 Feb 27 19:50:14 crc kubenswrapper[4941]: I0227 19:50:14.556250 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerDied","Data":"91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32"} Feb 27 19:50:15 crc kubenswrapper[4941]: I0227 19:50:15.565629 4941 generic.go:334] "Generic (PLEG): container finished" podID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerID="cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba" exitCode=0 Feb 27 19:50:15 crc kubenswrapper[4941]: I0227 19:50:15.565774 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerDied","Data":"cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba"} Feb 27 19:50:16 crc kubenswrapper[4941]: I0227 19:50:16.575748 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" 
event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerStarted","Data":"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"} Feb 27 19:50:16 crc kubenswrapper[4941]: I0227 19:50:16.604133 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7tsp" podStartSLOduration=3.172600229 podStartE2EDuration="4.604114479s" podCreationTimestamp="2026-02-27 19:50:12 +0000 UTC" firstStartedPulling="2026-02-27 19:50:14.55817995 +0000 UTC m=+932.819320400" lastFinishedPulling="2026-02-27 19:50:15.98969423 +0000 UTC m=+934.250834650" observedRunningTime="2026-02-27 19:50:16.600883068 +0000 UTC m=+934.862023498" watchObservedRunningTime="2026-02-27 19:50:16.604114479 +0000 UTC m=+934.865254919" Feb 27 19:50:17 crc kubenswrapper[4941]: I0227 19:50:17.085050 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:17 crc kubenswrapper[4941]: I0227 19:50:17.085119 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:17 crc kubenswrapper[4941]: I0227 19:50:17.148307 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:17 crc kubenswrapper[4941]: I0227 19:50:17.630028 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:20 crc kubenswrapper[4941]: E0227 19:50:20.470068 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" Feb 27 19:50:20 crc kubenswrapper[4941]: I0227 19:50:20.755988 4941 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:20 crc kubenswrapper[4941]: I0227 19:50:20.756443 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dcbj" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="registry-server" containerID="cri-o://44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9" gracePeriod=2 Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.174494 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.309424 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtrw6\" (UniqueName: \"kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6\") pod \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.309536 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content\") pod \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.309573 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities\") pod \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\" (UID: \"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa\") " Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.310518 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities" (OuterVolumeSpecName: "utilities") pod 
"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" (UID: "8ead7e2f-b52e-40bf-988f-ffc7c716f0aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.314562 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6" (OuterVolumeSpecName: "kube-api-access-qtrw6") pod "8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" (UID: "8ead7e2f-b52e-40bf-988f-ffc7c716f0aa"). InnerVolumeSpecName "kube-api-access-qtrw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.357315 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" (UID: "8ead7e2f-b52e-40bf-988f-ffc7c716f0aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.410849 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.410879 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.410892 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtrw6\" (UniqueName: \"kubernetes.io/projected/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa-kube-api-access-qtrw6\") on node \"crc\" DevicePath \"\"" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.607786 4941 generic.go:334] "Generic (PLEG): container finished" podID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerID="44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9" exitCode=0 Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.607861 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerDied","Data":"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9"} Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.607870 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dcbj" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.607899 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dcbj" event={"ID":"8ead7e2f-b52e-40bf-988f-ffc7c716f0aa","Type":"ContainerDied","Data":"2c1a438845191667096060894c6929b8e14ec42bdae7531acca80bb8347a1082"} Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.607924 4941 scope.go:117] "RemoveContainer" containerID="44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.624790 4941 scope.go:117] "RemoveContainer" containerID="96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.652131 4941 scope.go:117] "RemoveContainer" containerID="c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.664938 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.668923 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dcbj"] Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.678868 4941 scope.go:117] "RemoveContainer" containerID="44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9" Feb 27 19:50:21 crc kubenswrapper[4941]: E0227 19:50:21.679368 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9\": container with ID starting with 44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9 not found: ID does not exist" containerID="44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.679409 4941 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9"} err="failed to get container status \"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9\": rpc error: code = NotFound desc = could not find container \"44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9\": container with ID starting with 44afd41123a9d51b697c8fb7f310b9296278491f123ed529c9b0dc9508597ad9 not found: ID does not exist" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.679437 4941 scope.go:117] "RemoveContainer" containerID="96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486" Feb 27 19:50:21 crc kubenswrapper[4941]: E0227 19:50:21.679811 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486\": container with ID starting with 96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486 not found: ID does not exist" containerID="96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.679838 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486"} err="failed to get container status \"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486\": rpc error: code = NotFound desc = could not find container \"96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486\": container with ID starting with 96926ff961e2b735ed364e41c70de1ce743ea8e1bfc87f84abd0af8a5f62c486 not found: ID does not exist" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.679858 4941 scope.go:117] "RemoveContainer" containerID="c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d" Feb 27 19:50:21 crc kubenswrapper[4941]: E0227 
19:50:21.680052 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d\": container with ID starting with c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d not found: ID does not exist" containerID="c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d" Feb 27 19:50:21 crc kubenswrapper[4941]: I0227 19:50:21.680069 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d"} err="failed to get container status \"c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d\": rpc error: code = NotFound desc = could not find container \"c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d\": container with ID starting with c23c45ad1f596a58fc9e1f59d70c1e82c61a12ae7203de24212f2d024d59925d not found: ID does not exist" Feb 27 19:50:22 crc kubenswrapper[4941]: I0227 19:50:22.478023 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" path="/var/lib/kubelet/pods/8ead7e2f-b52e-40bf-988f-ffc7c716f0aa/volumes" Feb 27 19:50:23 crc kubenswrapper[4941]: I0227 19:50:23.283173 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:23 crc kubenswrapper[4941]: I0227 19:50:23.283518 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:23 crc kubenswrapper[4941]: I0227 19:50:23.346724 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7tsp" Feb 27 19:50:23 crc kubenswrapper[4941]: I0227 19:50:23.667917 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-w7tsp"
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.548872 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"]
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.549199 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7tsp" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="registry-server" containerID="cri-o://84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84" gracePeriod=2
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.901576 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7tsp"
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.981090 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content\") pod \"7ea288e3-3511-4870-8338-efb043ceb7b4\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") "
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.981186 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities\") pod \"7ea288e3-3511-4870-8338-efb043ceb7b4\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") "
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.981279 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6244\" (UniqueName: \"kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244\") pod \"7ea288e3-3511-4870-8338-efb043ceb7b4\" (UID: \"7ea288e3-3511-4870-8338-efb043ceb7b4\") "
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.982351 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities" (OuterVolumeSpecName: "utilities") pod "7ea288e3-3511-4870-8338-efb043ceb7b4" (UID: "7ea288e3-3511-4870-8338-efb043ceb7b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:50:26 crc kubenswrapper[4941]: I0227 19:50:26.987527 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244" (OuterVolumeSpecName: "kube-api-access-j6244") pod "7ea288e3-3511-4870-8338-efb043ceb7b4" (UID: "7ea288e3-3511-4870-8338-efb043ceb7b4"). InnerVolumeSpecName "kube-api-access-j6244". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.008175 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ea288e3-3511-4870-8338-efb043ceb7b4" (UID: "7ea288e3-3511-4870-8338-efb043ceb7b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.082957 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6244\" (UniqueName: \"kubernetes.io/projected/7ea288e3-3511-4870-8338-efb043ceb7b4-kube-api-access-j6244\") on node \"crc\" DevicePath \"\""
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.082997 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.083009 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea288e3-3511-4870-8338-efb043ceb7b4-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.652073 4941 generic.go:334] "Generic (PLEG): container finished" podID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerID="84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84" exitCode=0
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.652134 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerDied","Data":"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"}
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.652172 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7tsp" event={"ID":"7ea288e3-3511-4870-8338-efb043ceb7b4","Type":"ContainerDied","Data":"5efdcf3ad36c3e736bb5774775c1a474025335cbd1c99f99d00a46c7e8d2b078"}
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.652199 4941 scope.go:117] "RemoveContainer" containerID="84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.652361 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7tsp"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.679911 4941 scope.go:117] "RemoveContainer" containerID="cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.685413 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"]
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.692169 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7tsp"]
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.714743 4941 scope.go:117] "RemoveContainer" containerID="91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.730346 4941 scope.go:117] "RemoveContainer" containerID="84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"
Feb 27 19:50:27 crc kubenswrapper[4941]: E0227 19:50:27.730935 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84\": container with ID starting with 84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84 not found: ID does not exist" containerID="84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.730975 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84"} err="failed to get container status \"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84\": rpc error: code = NotFound desc = could not find container \"84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84\": container with ID starting with 84bd5d2280158012018f61199c8dc3be5b39041cbfe6e345c8fc52bb9d45bf84 not found: ID does not exist"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.731002 4941 scope.go:117] "RemoveContainer" containerID="cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba"
Feb 27 19:50:27 crc kubenswrapper[4941]: E0227 19:50:27.731358 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba\": container with ID starting with cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba not found: ID does not exist" containerID="cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.731396 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba"} err="failed to get container status \"cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba\": rpc error: code = NotFound desc = could not find container \"cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba\": container with ID starting with cbbae8541948e967a6682a4748eda718fabd149877ed54aac2d7fc25bb095eba not found: ID does not exist"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.731421 4941 scope.go:117] "RemoveContainer" containerID="91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32"
Feb 27 19:50:27 crc kubenswrapper[4941]: E0227 19:50:27.731657 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32\": container with ID starting with 91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32 not found: ID does not exist" containerID="91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32"
Feb 27 19:50:27 crc kubenswrapper[4941]: I0227 19:50:27.731679 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32"} err="failed to get container status \"91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32\": rpc error: code = NotFound desc = could not find container \"91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32\": container with ID starting with 91d056bcd82e2b7f740bda3a107ed91275aff84d55a0b84100e8b0b771e60e32 not found: ID does not exist"
Feb 27 19:50:28 crc kubenswrapper[4941]: E0227 19:50:28.468599 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:50:28 crc kubenswrapper[4941]: E0227 19:50:28.470565 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537030-89szz" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5"
Feb 27 19:50:28 crc kubenswrapper[4941]: E0227 19:50:28.475690 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:50:28 crc kubenswrapper[4941]: I0227 19:50:28.482068 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" path="/var/lib/kubelet/pods/7ea288e3-3511-4870-8338-efb043ceb7b4/volumes"
Feb 27 19:50:33 crc kubenswrapper[4941]: E0227 19:50:33.469315 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:50:41 crc kubenswrapper[4941]: E0227 19:50:41.471602 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:50:41 crc kubenswrapper[4941]: I0227 19:50:41.757859 4941 generic.go:334] "Generic (PLEG): container finished" podID="984d38fd-74e0-4716-a788-dadecfa16dc5" containerID="fd9666a9ea5fc0498e6b42a5197fa32e6c76af3eadd62ec2a08c685aee9041b2" exitCode=0
Feb 27 19:50:41 crc kubenswrapper[4941]: I0227 19:50:41.758004 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537030-89szz" event={"ID":"984d38fd-74e0-4716-a788-dadecfa16dc5","Type":"ContainerDied","Data":"fd9666a9ea5fc0498e6b42a5197fa32e6c76af3eadd62ec2a08c685aee9041b2"}
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.071927 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537030-89szz"
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.196757 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhrc\" (UniqueName: \"kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc\") pod \"984d38fd-74e0-4716-a788-dadecfa16dc5\" (UID: \"984d38fd-74e0-4716-a788-dadecfa16dc5\") "
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.204713 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc" (OuterVolumeSpecName: "kube-api-access-2dhrc") pod "984d38fd-74e0-4716-a788-dadecfa16dc5" (UID: "984d38fd-74e0-4716-a788-dadecfa16dc5"). InnerVolumeSpecName "kube-api-access-2dhrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.298363 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhrc\" (UniqueName: \"kubernetes.io/projected/984d38fd-74e0-4716-a788-dadecfa16dc5-kube-api-access-2dhrc\") on node \"crc\" DevicePath \"\""
Feb 27 19:50:43 crc kubenswrapper[4941]: E0227 19:50:43.469465 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.770878 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537030-89szz" event={"ID":"984d38fd-74e0-4716-a788-dadecfa16dc5","Type":"ContainerDied","Data":"756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c"}
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.770921 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537030-89szz"
Feb 27 19:50:43 crc kubenswrapper[4941]: I0227 19:50:43.770926 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756c3badcdea75bd9d7a908c568e72a003bb967128b55a6aa6964bc1e6a5d08c"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.144529 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-s2ljb"]
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.150626 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-s2ljb"]
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353451 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttm4p"]
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353689 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="extract-content"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353703 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="extract-content"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353717 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" containerName="oc"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353724 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" containerName="oc"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353735 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="extract-content"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353742 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="extract-content"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353751 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353757 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353765 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="extract-utilities"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353772 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="extract-utilities"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353779 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="extract-utilities"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353785 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="extract-utilities"
Feb 27 19:50:44 crc kubenswrapper[4941]: E0227 19:50:44.353794 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353801 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353886 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ead7e2f-b52e-40bf-988f-ffc7c716f0aa" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353896 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea288e3-3511-4870-8338-efb043ceb7b4" containerName="registry-server"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.353907 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" containerName="oc"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.354613 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.363967 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttm4p"]
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.413347 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-catalog-content\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.413710 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-utilities\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.413832 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t46t\" (UniqueName: \"kubernetes.io/projected/8d15d663-67b1-48e2-8e06-c4c27858e991-kube-api-access-9t46t\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.472077 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ecd957-4632-4dfb-9f87-28a2a83197ad" path="/var/lib/kubelet/pods/39ecd957-4632-4dfb-9f87-28a2a83197ad/volumes"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.515173 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t46t\" (UniqueName: \"kubernetes.io/projected/8d15d663-67b1-48e2-8e06-c4c27858e991-kube-api-access-9t46t\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.515258 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-catalog-content\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.515303 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-utilities\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.515885 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-utilities\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.515966 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d15d663-67b1-48e2-8e06-c4c27858e991-catalog-content\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.538658 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t46t\" (UniqueName: \"kubernetes.io/projected/8d15d663-67b1-48e2-8e06-c4c27858e991-kube-api-access-9t46t\") pod \"community-operators-ttm4p\" (UID: \"8d15d663-67b1-48e2-8e06-c4c27858e991\") " pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:44 crc kubenswrapper[4941]: I0227 19:50:44.674401 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttm4p"
Feb 27 19:50:45 crc kubenswrapper[4941]: I0227 19:50:45.159338 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttm4p"]
Feb 27 19:50:45 crc kubenswrapper[4941]: I0227 19:50:45.796965 4941 generic.go:334] "Generic (PLEG): container finished" podID="8d15d663-67b1-48e2-8e06-c4c27858e991" containerID="9b84be54efc5730470168d2e94b69cd0bfcfada7a5698ec353afc8c4a1c4d821" exitCode=0
Feb 27 19:50:45 crc kubenswrapper[4941]: I0227 19:50:45.797091 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttm4p" event={"ID":"8d15d663-67b1-48e2-8e06-c4c27858e991","Type":"ContainerDied","Data":"9b84be54efc5730470168d2e94b69cd0bfcfada7a5698ec353afc8c4a1c4d821"}
Feb 27 19:50:45 crc kubenswrapper[4941]: I0227 19:50:45.797359 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttm4p" event={"ID":"8d15d663-67b1-48e2-8e06-c4c27858e991","Type":"ContainerStarted","Data":"101631da6347c6e288bf82cb341a4e8829aedb9a51332adccedb87d1781789f1"}
Feb 27 19:50:47 crc kubenswrapper[4941]: E0227 19:50:47.411755 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 19:50:47 crc kubenswrapper[4941]: E0227 19:50:47.412057 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:50:47 crc kubenswrapper[4941]: E0227 19:50:47.413838 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:50:47 crc kubenswrapper[4941]: E0227 19:50:47.470783 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:50:47 crc kubenswrapper[4941]: E0227 19:50:47.811957 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:50:54 crc kubenswrapper[4941]: E0227 19:50:54.469600 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:50:56 crc kubenswrapper[4941]: E0227 19:50:56.469368 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:50:59 crc kubenswrapper[4941]: E0227 19:50:59.469341 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:51:02 crc kubenswrapper[4941]: E0227 19:51:02.228782 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 19:51:02 crc kubenswrapper[4941]: E0227 19:51:02.229367 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:51:02 crc kubenswrapper[4941]: E0227 19:51:02.230659 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:51:06 crc kubenswrapper[4941]: E0227 19:51:06.468980 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:51:07 crc kubenswrapper[4941]: E0227 19:51:07.469229 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:51:10 crc kubenswrapper[4941]: E0227 19:51:10.468832 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545"
Feb 27 19:51:15 crc kubenswrapper[4941]: E0227 19:51:15.469325 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:51:17 crc kubenswrapper[4941]: E0227 19:51:17.469004 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:51:19 crc kubenswrapper[4941]: E0227 19:51:19.468664 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:51:22 crc kubenswrapper[4941]: I0227 19:51:22.955080 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"]
Feb 27 19:51:22 crc kubenswrapper[4941]: I0227 19:51:22.957139 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:22 crc kubenswrapper[4941]: I0227 19:51:22.967203 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"]
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.022639 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62htp\" (UniqueName: \"kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.022693 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.022914 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.124601 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.125015 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.125058 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62htp\" (UniqueName: \"kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.125119 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.125837 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.144125 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62htp\" (UniqueName: \"kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp\") pod \"redhat-operators-sx9cv\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.326539 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx9cv"
Feb 27 19:51:23 crc kubenswrapper[4941]: I0227 19:51:23.543979 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"]
Feb 27 19:51:23 crc kubenswrapper[4941]: W0227 19:51:23.553221 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd498e96f_290e_4bd5_a6dd_5cdbbd016a17.slice/crio-c23e0e6f88b915944b93d573529c245032c9107554034a29b846256dae52c26f WatchSource:0}: Error finding container c23e0e6f88b915944b93d573529c245032c9107554034a29b846256dae52c26f: Status 404 returned error can't find the container with id c23e0e6f88b915944b93d573529c245032c9107554034a29b846256dae52c26f
Feb 27 19:51:24 crc kubenswrapper[4941]: I0227 19:51:24.026170 4941 generic.go:334] "Generic (PLEG): container finished" podID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerID="bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a" exitCode=0
Feb 27 19:51:24 crc
kubenswrapper[4941]: I0227 19:51:24.026211 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerDied","Data":"bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a"} Feb 27 19:51:24 crc kubenswrapper[4941]: I0227 19:51:24.026237 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerStarted","Data":"c23e0e6f88b915944b93d573529c245032c9107554034a29b846256dae52c26f"} Feb 27 19:51:24 crc kubenswrapper[4941]: E0227 19:51:24.846093 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:51:24 crc kubenswrapper[4941]: E0227 19:51:24.846605 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62htp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx9cv_openshift-marketplace(d498e96f-290e-4bd5-a6dd-5cdbbd016a17): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:51:24 crc kubenswrapper[4941]: E0227 19:51:24.847889 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:51:25 crc kubenswrapper[4941]: E0227 19:51:25.034984 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:51:27 crc kubenswrapper[4941]: I0227 19:51:27.047791 4941 generic.go:334] "Generic (PLEG): container finished" podID="4a332721-1e6a-4f6b-a4ff-a0943263f545" containerID="f97c11f0e18893095b700ef93ea7a61ad20da519c0b2998f3dee9e9522ee26d7" exitCode=0 Feb 27 19:51:27 crc kubenswrapper[4941]: I0227 19:51:27.047855 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" event={"ID":"4a332721-1e6a-4f6b-a4ff-a0943263f545","Type":"ContainerDied","Data":"f97c11f0e18893095b700ef93ea7a61ad20da519c0b2998f3dee9e9522ee26d7"} Feb 27 19:51:28 crc kubenswrapper[4941]: I0227 19:51:28.343158 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" Feb 27 19:51:28 crc kubenswrapper[4941]: I0227 19:51:28.390506 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm949\" (UniqueName: \"kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949\") pod \"4a332721-1e6a-4f6b-a4ff-a0943263f545\" (UID: \"4a332721-1e6a-4f6b-a4ff-a0943263f545\") " Feb 27 19:51:28 crc kubenswrapper[4941]: I0227 19:51:28.396306 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949" (OuterVolumeSpecName: "kube-api-access-bm949") pod "4a332721-1e6a-4f6b-a4ff-a0943263f545" (UID: "4a332721-1e6a-4f6b-a4ff-a0943263f545"). InnerVolumeSpecName "kube-api-access-bm949". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:51:28 crc kubenswrapper[4941]: I0227 19:51:28.497498 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm949\" (UniqueName: \"kubernetes.io/projected/4a332721-1e6a-4f6b-a4ff-a0943263f545-kube-api-access-bm949\") on node \"crc\" DevicePath \"\"" Feb 27 19:51:29 crc kubenswrapper[4941]: I0227 19:51:29.064082 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" event={"ID":"4a332721-1e6a-4f6b-a4ff-a0943263f545","Type":"ContainerDied","Data":"d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4"} Feb 27 19:51:29 crc kubenswrapper[4941]: I0227 19:51:29.064138 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09adf0d7888c39586090046d754e08346ae8fec3f38cfdd62a80741043b4db4" Feb 27 19:51:29 crc kubenswrapper[4941]: I0227 19:51:29.064201 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-bfg7w" Feb 27 19:51:29 crc kubenswrapper[4941]: E0227 19:51:29.071269 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:51:29 crc kubenswrapper[4941]: E0227 19:51:29.071559 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Seccomp
Profile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:51:29 crc kubenswrapper[4941]: E0227 19:51:29.072751 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:51:29 crc kubenswrapper[4941]: I0227 19:51:29.407149 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-q6npl"] Feb 27 19:51:29 crc kubenswrapper[4941]: I0227 19:51:29.412808 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-q6npl"] Feb 27 19:51:30 crc kubenswrapper[4941]: I0227 19:51:30.477907 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ef0d4b-a801-47ef-98fe-b2b078761207" path="/var/lib/kubelet/pods/76ef0d4b-a801-47ef-98fe-b2b078761207/volumes" Feb 27 19:51:31 crc kubenswrapper[4941]: E0227 19:51:31.469955 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:51:32 crc kubenswrapper[4941]: E0227 19:51:32.474052 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:51:40 crc kubenswrapper[4941]: E0227 19:51:40.418643 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:51:40 crc kubenswrapper[4941]: E0227 19:51:40.419540 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62htp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx9cv_openshift-marketplace(d498e96f-290e-4bd5-a6dd-5cdbbd016a17): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:51:40 crc kubenswrapper[4941]: E0227 19:51:40.420848 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:51:40 crc kubenswrapper[4941]: E0227 19:51:40.468545 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:51:43 crc kubenswrapper[4941]: E0227 19:51:43.468567 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:51:43 crc kubenswrapper[4941]: I0227 19:51:43.577780 4941 scope.go:117] "RemoveContainer" containerID="3f93780e7ac08dfb661b9a007b0b1a57dd8ca57fbb2d074ff41a7128f4073c3b" Feb 27 19:51:43 crc kubenswrapper[4941]: I0227 19:51:43.615738 4941 scope.go:117] "RemoveContainer" containerID="ccc47308f9e15270871f9b67f98348dcb56ae4139ff6aabd071e53646b179c4f" Feb 27 19:51:43 crc kubenswrapper[4941]: I0227 19:51:43.646813 4941 scope.go:117] "RemoveContainer" containerID="dda3a1254eb092cd3d656052b4c78dbb7f7dbdea2538595d582368ece6994da7" Feb 27 19:51:44 crc kubenswrapper[4941]: E0227 19:51:44.470081 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:51:54 crc kubenswrapper[4941]: E0227 19:51:54.471221 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:51:55 crc kubenswrapper[4941]: E0227 19:51:55.150282 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:51:55 crc kubenswrapper[4941]: E0227 19:51:55.150538 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp7nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkqp2_openshift-marketplace(048b2614-045b-4bed-89ef-8554c574f3e6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:51:55 crc kubenswrapper[4941]: E0227 19:51:55.151845 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:51:55 crc kubenswrapper[4941]: E0227 19:51:55.469848 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:51:56 crc kubenswrapper[4941]: E0227 19:51:56.469537 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.137176 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537032-pcvsw"] Feb 27 19:52:00 crc kubenswrapper[4941]: E0227 19:52:00.137724 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" containerName="oc" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.137738 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" containerName="oc" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.137847 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" containerName="oc" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.138245 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.140553 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.140599 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.140683 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.147140 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537032-pcvsw"] Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.319854 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vz4d\" (UniqueName: \"kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d\") pod \"auto-csr-approver-29537032-pcvsw\" (UID: \"0a787d75-f193-4f58-833b-337e41627a9d\") " pod="openshift-infra/auto-csr-approver-29537032-pcvsw" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.421446 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vz4d\" (UniqueName: \"kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d\") pod \"auto-csr-approver-29537032-pcvsw\" (UID: \"0a787d75-f193-4f58-833b-337e41627a9d\") " pod="openshift-infra/auto-csr-approver-29537032-pcvsw" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.445325 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vz4d\" (UniqueName: \"kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d\") pod \"auto-csr-approver-29537032-pcvsw\" (UID: \"0a787d75-f193-4f58-833b-337e41627a9d\") " 
pod="openshift-infra/auto-csr-approver-29537032-pcvsw" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.459290 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" Feb 27 19:52:00 crc kubenswrapper[4941]: I0227 19:52:00.914818 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537032-pcvsw"] Feb 27 19:52:00 crc kubenswrapper[4941]: W0227 19:52:00.926535 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a787d75_f193_4f58_833b_337e41627a9d.slice/crio-b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a WatchSource:0}: Error finding container b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a: Status 404 returned error can't find the container with id b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a Feb 27 19:52:01 crc kubenswrapper[4941]: I0227 19:52:01.275191 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" event={"ID":"0a787d75-f193-4f58-833b-337e41627a9d","Type":"ContainerStarted","Data":"b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a"} Feb 27 19:52:02 crc kubenswrapper[4941]: E0227 19:52:02.108077 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:52:02 crc kubenswrapper[4941]: E0227 19:52:02.108565 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:52:02 crc kubenswrapper[4941]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:52:02 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537032-pcvsw_openshift-infra(0a787d75-f193-4f58-833b-337e41627a9d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:52:02 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:52:02 crc kubenswrapper[4941]: E0227 19:52:02.109745 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" 
podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:02 crc kubenswrapper[4941]: E0227 19:52:02.283106 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:06 crc kubenswrapper[4941]: E0227 19:52:06.131950 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:52:06 crc kubenswrapper[4941]: E0227 19:52:06.132445 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62htp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx9cv_openshift-marketplace(d498e96f-290e-4bd5-a6dd-5cdbbd016a17): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:52:06 crc kubenswrapper[4941]: E0227 19:52:06.133736 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:52:08 crc kubenswrapper[4941]: E0227 19:52:08.471024 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:52:08 crc kubenswrapper[4941]: E0227 19:52:08.472619 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:52:11 crc kubenswrapper[4941]: E0227 19:52:11.010177 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:52:11 crc kubenswrapper[4941]: E0227 19:52:11.010371 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zs7bf_openshift-marketplace(1737ca02-aded-4254-b433-aac4a9ccad71): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:52:11 crc kubenswrapper[4941]: E0227 19:52:11.011610 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:52:14 crc kubenswrapper[4941]: E0227 19:52:14.713562 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:52:14 crc kubenswrapper[4941]: E0227 19:52:14.714143 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:52:14 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:52:14 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29537032-pcvsw_openshift-infra(0a787d75-f193-4f58-833b-337e41627a9d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:52:14 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:52:14 crc kubenswrapper[4941]: E0227 19:52:14.715464 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:20 crc kubenswrapper[4941]: E0227 19:52:20.133971 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:52:20 crc kubenswrapper[4941]: E0227 19:52:20.134438 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:52:20 crc kubenswrapper[4941]: E0227 19:52:20.135700 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:52:20 crc kubenswrapper[4941]: E0227 19:52:20.469813 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:52:21 crc kubenswrapper[4941]: E0227 19:52:21.469686 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:52:24 crc kubenswrapper[4941]: E0227 19:52:24.468884 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:52:29 crc kubenswrapper[4941]: E0227 19:52:29.469827 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:29 crc kubenswrapper[4941]: I0227 19:52:29.851119 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:52:29 crc kubenswrapper[4941]: I0227 19:52:29.851194 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:52:33 crc kubenswrapper[4941]: E0227 19:52:33.469838 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:52:34 crc kubenswrapper[4941]: E0227 19:52:34.468744 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:52:35 crc kubenswrapper[4941]: E0227 19:52:35.469890 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" Feb 27 19:52:38 crc kubenswrapper[4941]: E0227 19:52:38.471044 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:52:44 crc kubenswrapper[4941]: E0227 19:52:44.304575 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:52:44 crc kubenswrapper[4941]: E0227 19:52:44.305165 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:52:44 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:52:44 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537032-pcvsw_openshift-infra(0a787d75-f193-4f58-833b-337e41627a9d): ErrImagePull: copying system image from manifest list: 
reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:52:44 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:52:44 crc kubenswrapper[4941]: E0227 19:52:44.306395 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:44 crc kubenswrapper[4941]: E0227 19:52:44.469663 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:52:49 crc kubenswrapper[4941]: E0227 19:52:49.471988 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:52:50 crc kubenswrapper[4941]: E0227 19:52:50.467529 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:52:50 crc kubenswrapper[4941]: I0227 19:52:50.620079 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerStarted","Data":"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc"} Feb 27 19:52:51 crc kubenswrapper[4941]: I0227 19:52:51.628972 4941 generic.go:334] "Generic (PLEG): container finished" podID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerID="0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc" exitCode=0 Feb 27 19:52:51 crc kubenswrapper[4941]: I0227 19:52:51.629045 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerDied","Data":"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc"} Feb 27 19:52:52 crc kubenswrapper[4941]: I0227 19:52:52.635401 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerStarted","Data":"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4"} Feb 27 19:52:52 crc kubenswrapper[4941]: I0227 19:52:52.653206 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sx9cv" podStartSLOduration=2.6592105029999997 podStartE2EDuration="1m30.65318851s" podCreationTimestamp="2026-02-27 19:51:22 +0000 UTC" firstStartedPulling="2026-02-27 19:51:24.027821563 +0000 UTC m=+1002.288962003" lastFinishedPulling="2026-02-27 19:52:52.02179956 +0000 UTC m=+1090.282940010" observedRunningTime="2026-02-27 19:52:52.64927205 +0000 UTC m=+1090.910412490" watchObservedRunningTime="2026-02-27 19:52:52.65318851 +0000 UTC m=+1090.914328930" Feb 27 19:52:53 crc kubenswrapper[4941]: I0227 19:52:53.327639 4941 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:52:53 crc kubenswrapper[4941]: I0227 19:52:53.327699 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:52:54 crc kubenswrapper[4941]: I0227 19:52:54.387872 4941 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="registry-server" probeResult="failure" output=< Feb 27 19:52:54 crc kubenswrapper[4941]: timeout: failed to connect service ":50051" within 1s Feb 27 19:52:54 crc kubenswrapper[4941]: > Feb 27 19:52:58 crc kubenswrapper[4941]: E0227 19:52:58.470909 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:52:59 crc kubenswrapper[4941]: E0227 19:52:59.469032 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:52:59 crc kubenswrapper[4941]: I0227 19:52:59.850861 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:52:59 crc kubenswrapper[4941]: I0227 19:52:59.850993 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" 
podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:53:02 crc kubenswrapper[4941]: E0227 19:53:02.472825 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:53:03 crc kubenswrapper[4941]: I0227 19:53:03.395091 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:53:03 crc kubenswrapper[4941]: I0227 19:53:03.458498 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:53:03 crc kubenswrapper[4941]: I0227 19:53:03.947245 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"] Feb 27 19:53:04 crc kubenswrapper[4941]: E0227 19:53:04.470575 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:53:04 crc kubenswrapper[4941]: I0227 19:53:04.713537 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sx9cv" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="registry-server" containerID="cri-o://9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4" gracePeriod=2 Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.110625 4941 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.298623 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content\") pod \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.298760 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities\") pod \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.299623 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62htp\" (UniqueName: \"kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp\") pod \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\" (UID: \"d498e96f-290e-4bd5-a6dd-5cdbbd016a17\") " Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.300364 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities" (OuterVolumeSpecName: "utilities") pod "d498e96f-290e-4bd5-a6dd-5cdbbd016a17" (UID: "d498e96f-290e-4bd5-a6dd-5cdbbd016a17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.304086 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp" (OuterVolumeSpecName: "kube-api-access-62htp") pod "d498e96f-290e-4bd5-a6dd-5cdbbd016a17" (UID: "d498e96f-290e-4bd5-a6dd-5cdbbd016a17"). 
InnerVolumeSpecName "kube-api-access-62htp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.400892 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.400923 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62htp\" (UniqueName: \"kubernetes.io/projected/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-kube-api-access-62htp\") on node \"crc\" DevicePath \"\"" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.427168 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d498e96f-290e-4bd5-a6dd-5cdbbd016a17" (UID: "d498e96f-290e-4bd5-a6dd-5cdbbd016a17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.501994 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d498e96f-290e-4bd5-a6dd-5cdbbd016a17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.725629 4941 generic.go:334] "Generic (PLEG): container finished" podID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerID="9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4" exitCode=0 Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.725689 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerDied","Data":"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4"} Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.725731 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx9cv" event={"ID":"d498e96f-290e-4bd5-a6dd-5cdbbd016a17","Type":"ContainerDied","Data":"c23e0e6f88b915944b93d573529c245032c9107554034a29b846256dae52c26f"} Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.725739 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx9cv" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.725765 4941 scope.go:117] "RemoveContainer" containerID="9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.758808 4941 scope.go:117] "RemoveContainer" containerID="0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.795061 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"] Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.801335 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sx9cv"] Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.814098 4941 scope.go:117] "RemoveContainer" containerID="bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.837847 4941 scope.go:117] "RemoveContainer" containerID="9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4" Feb 27 19:53:05 crc kubenswrapper[4941]: E0227 19:53:05.838498 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4\": container with ID starting with 9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4 not found: ID does not exist" containerID="9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.838545 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4"} err="failed to get container status \"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4\": rpc error: code = NotFound desc = could not find container 
\"9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4\": container with ID starting with 9f36003a8107bc57bd217bb4bac49e2ed6cd6b570abb50a02b0f1a2bfd5ebeb4 not found: ID does not exist" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.838575 4941 scope.go:117] "RemoveContainer" containerID="0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc" Feb 27 19:53:05 crc kubenswrapper[4941]: E0227 19:53:05.839040 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc\": container with ID starting with 0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc not found: ID does not exist" containerID="0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.839068 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc"} err="failed to get container status \"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc\": rpc error: code = NotFound desc = could not find container \"0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc\": container with ID starting with 0e01a09fb51389364b3c8179d43dcd21dd3dfca0d742bd4dc291b31b141c8adc not found: ID does not exist" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.839085 4941 scope.go:117] "RemoveContainer" containerID="bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a" Feb 27 19:53:05 crc kubenswrapper[4941]: E0227 19:53:05.839552 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a\": container with ID starting with bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a not found: ID does not exist" 
containerID="bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a" Feb 27 19:53:05 crc kubenswrapper[4941]: I0227 19:53:05.839576 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a"} err="failed to get container status \"bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a\": rpc error: code = NotFound desc = could not find container \"bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a\": container with ID starting with bf533649561b76d66e0053203906a0429f582e5850a5b60bb1acf3c7aed4ff4a not found: ID does not exist" Feb 27 19:53:06 crc kubenswrapper[4941]: I0227 19:53:06.479016 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" path="/var/lib/kubelet/pods/d498e96f-290e-4bd5-a6dd-5cdbbd016a17/volumes" Feb 27 19:53:10 crc kubenswrapper[4941]: E0227 19:53:10.471512 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:53:10 crc kubenswrapper[4941]: E0227 19:53:10.471569 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:53:13 crc kubenswrapper[4941]: E0227 19:53:13.469050 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:53:18 crc kubenswrapper[4941]: E0227 19:53:18.471099 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:53:22 crc kubenswrapper[4941]: E0227 19:53:22.474865 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:53:25 crc kubenswrapper[4941]: E0227 19:53:25.470614 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:53:27 crc kubenswrapper[4941]: E0227 19:53:27.469749 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:53:29 crc kubenswrapper[4941]: E0227 19:53:29.469257 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:53:29 crc kubenswrapper[4941]: I0227 19:53:29.851618 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:53:29 crc kubenswrapper[4941]: I0227 19:53:29.851994 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:53:29 crc kubenswrapper[4941]: I0227 19:53:29.852060 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" Feb 27 19:53:29 crc kubenswrapper[4941]: I0227 19:53:29.852879 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:53:29 crc kubenswrapper[4941]: I0227 19:53:29.852982 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7" gracePeriod=600 Feb 27 19:53:30 crc kubenswrapper[4941]: I0227 19:53:30.889300 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" 
containerID="bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7" exitCode=0 Feb 27 19:53:30 crc kubenswrapper[4941]: I0227 19:53:30.889343 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7"} Feb 27 19:53:30 crc kubenswrapper[4941]: I0227 19:53:30.889617 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a"} Feb 27 19:53:30 crc kubenswrapper[4941]: I0227 19:53:30.889646 4941 scope.go:117] "RemoveContainer" containerID="7980c890302f1228f0f62409ae6b09a7950440b361f4647d2f4be6a242445589" Feb 27 19:53:35 crc kubenswrapper[4941]: I0227 19:53:35.469814 4941 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:53:36 crc kubenswrapper[4941]: E0227 19:53:36.346888 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:53:36 crc kubenswrapper[4941]: E0227 19:53:36.347401 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:53:36 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate 
approve Feb 27 19:53:36 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537032-pcvsw_openshift-infra(0a787d75-f193-4f58-833b-337e41627a9d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:53:36 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:53:36 crc kubenswrapper[4941]: E0227 19:53:36.348808 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:53:37 crc kubenswrapper[4941]: E0227 19:53:37.469355 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:53:38 crc kubenswrapper[4941]: E0227 19:53:38.468916 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:53:40 crc kubenswrapper[4941]: E0227 19:53:40.468144 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:53:49 crc kubenswrapper[4941]: E0227 19:53:49.469536 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:53:51 crc kubenswrapper[4941]: E0227 19:53:51.469497 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:53:52 crc kubenswrapper[4941]: E0227 19:53:52.054412 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:53:52 crc kubenswrapper[4941]: E0227 19:53:52.054645 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:53:52 crc kubenswrapper[4941]: E0227 19:53:52.056233 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:53:53 crc kubenswrapper[4941]: E0227 19:53:53.469891 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.139005 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537034-6hrpc"] Feb 27 19:54:00 crc kubenswrapper[4941]: E0227 19:54:00.139737 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="registry-server" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.139752 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="registry-server" Feb 27 19:54:00 crc kubenswrapper[4941]: 
E0227 19:54:00.139770 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="extract-utilities" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.139778 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="extract-utilities" Feb 27 19:54:00 crc kubenswrapper[4941]: E0227 19:54:00.139793 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="extract-content" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.139801 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="extract-content" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.139912 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="d498e96f-290e-4bd5-a6dd-5cdbbd016a17" containerName="registry-server" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.141223 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.146396 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537034-6hrpc"] Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.281843 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6w9\" (UniqueName: \"kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9\") pod \"auto-csr-approver-29537034-6hrpc\" (UID: \"a91543e1-1473-4e7b-9b73-cac7cc6c36c1\") " pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.383591 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6w9\" (UniqueName: \"kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9\") pod \"auto-csr-approver-29537034-6hrpc\" (UID: \"a91543e1-1473-4e7b-9b73-cac7cc6c36c1\") " pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.418572 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6w9\" (UniqueName: \"kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9\") pod \"auto-csr-approver-29537034-6hrpc\" (UID: \"a91543e1-1473-4e7b-9b73-cac7cc6c36c1\") " pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:54:00 crc kubenswrapper[4941]: E0227 19:54:00.470618 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.506898 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:54:00 crc kubenswrapper[4941]: W0227 19:54:00.901076 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda91543e1_1473_4e7b_9b73_cac7cc6c36c1.slice/crio-9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b WatchSource:0}: Error finding container 9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b: Status 404 returned error can't find the container with id 9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b Feb 27 19:54:00 crc kubenswrapper[4941]: I0227 19:54:00.913972 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537034-6hrpc"] Feb 27 19:54:01 crc kubenswrapper[4941]: I0227 19:54:01.088036 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" event={"ID":"a91543e1-1473-4e7b-9b73-cac7cc6c36c1","Type":"ContainerStarted","Data":"9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b"} Feb 27 19:54:01 crc kubenswrapper[4941]: E0227 19:54:01.931692 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:54:01 crc kubenswrapper[4941]: E0227 19:54:01.932074 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:54:01 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm 
certificate approve Feb 27 19:54:01 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl6w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537034-6hrpc_openshift-infra(a91543e1-1473-4e7b-9b73-cac7cc6c36c1): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:54:01 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:54:01 crc kubenswrapper[4941]: E0227 19:54:01.933224 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:02 crc kubenswrapper[4941]: E0227 19:54:02.094314 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:03 crc kubenswrapper[4941]: E0227 19:54:03.468536 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:54:04 crc kubenswrapper[4941]: E0227 19:54:04.477199 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:54:07 crc kubenswrapper[4941]: E0227 19:54:07.469500 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:54:14 crc kubenswrapper[4941]: E0227 19:54:14.468058 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:54:17 crc kubenswrapper[4941]: E0227 19:54:17.468231 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:54:17 crc kubenswrapper[4941]: E0227 19:54:17.537871 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:54:17 crc kubenswrapper[4941]: E0227 19:54:17.538021 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:54:17 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:54:17 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl6w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537034-6hrpc_openshift-infra(a91543e1-1473-4e7b-9b73-cac7cc6c36c1): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:54:17 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:54:17 crc kubenswrapper[4941]: E0227 19:54:17.539264 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:18 crc kubenswrapper[4941]: E0227 19:54:18.469893 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:54:19 crc kubenswrapper[4941]: E0227 19:54:19.468417 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:54:26 crc kubenswrapper[4941]: E0227 19:54:26.470039 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:54:29 crc 
kubenswrapper[4941]: E0227 19:54:29.468184 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:30 crc kubenswrapper[4941]: E0227 19:54:30.469984 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:54:30 crc kubenswrapper[4941]: E0227 19:54:30.470073 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:54:31 crc kubenswrapper[4941]: E0227 19:54:31.468691 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:54:41 crc kubenswrapper[4941]: E0227 19:54:41.468766 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:54:41 crc kubenswrapper[4941]: E0227 19:54:41.717167 4941 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:54:41 crc kubenswrapper[4941]: E0227 19:54:41.717428 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:54:41 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:54:41 crc kubenswrapper[4941]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl6w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537034-6hrpc_openshift-infra(a91543e1-1473-4e7b-9b73-cac7cc6c36c1): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:54:41 crc kubenswrapper[4941]: > 
logger="UnhandledError" Feb 27 19:54:41 crc kubenswrapper[4941]: E0227 19:54:41.718710 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:44 crc kubenswrapper[4941]: E0227 19:54:44.469253 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:54:45 crc kubenswrapper[4941]: E0227 19:54:45.469745 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:54:45 crc kubenswrapper[4941]: E0227 19:54:45.470239 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:54:53 crc kubenswrapper[4941]: E0227 19:54:53.470752 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:54:56 crc kubenswrapper[4941]: E0227 19:54:56.468513 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:54:57 crc kubenswrapper[4941]: E0227 19:54:57.469311 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:54:57 crc kubenswrapper[4941]: E0227 19:54:57.640538 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:54:57 crc kubenswrapper[4941]: E0227 19:54:57.640776 4941 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:54:57 crc kubenswrapper[4941]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:54:57 crc kubenswrapper[4941]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537032-pcvsw_openshift-infra(0a787d75-f193-4f58-833b-337e41627a9d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:54:57 crc kubenswrapper[4941]: > logger="UnhandledError" Feb 27 19:54:57 crc kubenswrapper[4941]: E0227 19:54:57.642162 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:54:59 crc kubenswrapper[4941]: E0227 19:54:59.469483 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:55:06 crc kubenswrapper[4941]: E0227 19:55:06.469652 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:55:09 crc kubenswrapper[4941]: E0227 19:55:09.469373 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:55:10 crc kubenswrapper[4941]: E0227 19:55:10.469104 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:55:12 crc kubenswrapper[4941]: E0227 19:55:12.475263 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:55:12 crc kubenswrapper[4941]: E0227 19:55:12.476222 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:55:17 crc kubenswrapper[4941]: E0227 19:55:17.469330 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" Feb 27 19:55:20 crc kubenswrapper[4941]: E0227 19:55:20.470320 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:55:24 crc kubenswrapper[4941]: E0227 19:55:24.469219 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:55:24 crc kubenswrapper[4941]: E0227 19:55:24.471064 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:55:25 crc kubenswrapper[4941]: E0227 19:55:25.470026 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" 
podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:55:34 crc kubenswrapper[4941]: I0227 19:55:34.675774 4941 generic.go:334] "Generic (PLEG): container finished" podID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" containerID="01cc9e5710c2f5e249963a58017145cf7d458e2f591c77c9e61508dd5ebccf90" exitCode=0 Feb 27 19:55:34 crc kubenswrapper[4941]: I0227 19:55:34.675884 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" event={"ID":"a91543e1-1473-4e7b-9b73-cac7cc6c36c1","Type":"ContainerDied","Data":"01cc9e5710c2f5e249963a58017145cf7d458e2f591c77c9e61508dd5ebccf90"} Feb 27 19:55:35 crc kubenswrapper[4941]: E0227 19:55:35.469617 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:55:35 crc kubenswrapper[4941]: I0227 19:55:35.997055 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.045825 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl6w9\" (UniqueName: \"kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9\") pod \"a91543e1-1473-4e7b-9b73-cac7cc6c36c1\" (UID: \"a91543e1-1473-4e7b-9b73-cac7cc6c36c1\") " Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.052172 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9" (OuterVolumeSpecName: "kube-api-access-cl6w9") pod "a91543e1-1473-4e7b-9b73-cac7cc6c36c1" (UID: "a91543e1-1473-4e7b-9b73-cac7cc6c36c1"). InnerVolumeSpecName "kube-api-access-cl6w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.147611 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl6w9\" (UniqueName: \"kubernetes.io/projected/a91543e1-1473-4e7b-9b73-cac7cc6c36c1-kube-api-access-cl6w9\") on node \"crc\" DevicePath \"\"" Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.691680 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" event={"ID":"a91543e1-1473-4e7b-9b73-cac7cc6c36c1","Type":"ContainerDied","Data":"9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b"} Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.691725 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9159beba1e8913c5cca9a0cdb219ea1808e92e04a6f94c4034d26084bcfda84b" Feb 27 19:55:36 crc kubenswrapper[4941]: I0227 19:55:36.691724 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537034-6hrpc" Feb 27 19:55:37 crc kubenswrapper[4941]: I0227 19:55:37.062058 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-whvl4"] Feb 27 19:55:37 crc kubenswrapper[4941]: I0227 19:55:37.066113 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-whvl4"] Feb 27 19:55:37 crc kubenswrapper[4941]: E0227 19:55:37.469602 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:55:38 crc kubenswrapper[4941]: E0227 19:55:38.469443 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:55:38 crc kubenswrapper[4941]: I0227 19:55:38.478506 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ae0dbf-9bfb-4038-98a5-1fc39572c5b4" path="/var/lib/kubelet/pods/30ae0dbf-9bfb-4038-98a5-1fc39572c5b4/volumes" Feb 27 19:55:40 crc kubenswrapper[4941]: E0227 19:55:40.470340 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:55:43 crc kubenswrapper[4941]: I0227 19:55:43.766044 4941 scope.go:117] "RemoveContainer" containerID="fa25ce9486830c2a08037587a6a7bf82ff0c027c1328886ab4ddb89b97577d6e" Feb 27 19:55:47 crc kubenswrapper[4941]: E0227 19:55:47.468501 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:55:51 crc kubenswrapper[4941]: E0227 19:55:51.471288 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:55:52 crc kubenswrapper[4941]: E0227 19:55:52.474689 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:55:52 crc kubenswrapper[4941]: E0227 19:55:52.474860 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:55:59 crc kubenswrapper[4941]: E0227 19:55:59.469917 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:55:59 crc kubenswrapper[4941]: I0227 19:55:59.851261 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:55:59 crc kubenswrapper[4941]: I0227 19:55:59.851360 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.140373 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537036-dpxbt"] Feb 27 19:56:00 crc kubenswrapper[4941]: E0227 19:56:00.140806 4941 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" containerName="oc" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.140850 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" containerName="oc" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.141263 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" containerName="oc" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.141987 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.158035 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537036-dpxbt"] Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.203540 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlnj\" (UniqueName: \"kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj\") pod \"auto-csr-approver-29537036-dpxbt\" (UID: \"42d2e8aa-740b-468b-a40b-9aa0fefc760c\") " pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.305537 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxlnj\" (UniqueName: \"kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj\") pod \"auto-csr-approver-29537036-dpxbt\" (UID: \"42d2e8aa-740b-468b-a40b-9aa0fefc760c\") " pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.323445 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxlnj\" (UniqueName: \"kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj\") pod \"auto-csr-approver-29537036-dpxbt\" (UID: 
\"42d2e8aa-740b-468b-a40b-9aa0fefc760c\") " pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.459094 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.641623 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537036-dpxbt"] Feb 27 19:56:00 crc kubenswrapper[4941]: I0227 19:56:00.874654 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" event={"ID":"42d2e8aa-740b-468b-a40b-9aa0fefc760c","Type":"ContainerStarted","Data":"696f8cf9c0af23cf44521c236dada4fb44bd3fa6eb1df5861351f49f2c14fa3c"} Feb 27 19:56:02 crc kubenswrapper[4941]: I0227 19:56:02.889773 4941 generic.go:334] "Generic (PLEG): container finished" podID="42d2e8aa-740b-468b-a40b-9aa0fefc760c" containerID="dd0153d1954e0d4f7b2f0f996dc11de8d0fd1f5d06fee57e808a66165740f1fa" exitCode=0 Feb 27 19:56:02 crc kubenswrapper[4941]: I0227 19:56:02.889888 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" event={"ID":"42d2e8aa-740b-468b-a40b-9aa0fefc760c","Type":"ContainerDied","Data":"dd0153d1954e0d4f7b2f0f996dc11de8d0fd1f5d06fee57e808a66165740f1fa"} Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.107010 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.156846 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxlnj\" (UniqueName: \"kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj\") pod \"42d2e8aa-740b-468b-a40b-9aa0fefc760c\" (UID: \"42d2e8aa-740b-468b-a40b-9aa0fefc760c\") " Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.162744 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj" (OuterVolumeSpecName: "kube-api-access-sxlnj") pod "42d2e8aa-740b-468b-a40b-9aa0fefc760c" (UID: "42d2e8aa-740b-468b-a40b-9aa0fefc760c"). InnerVolumeSpecName "kube-api-access-sxlnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.258196 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxlnj\" (UniqueName: \"kubernetes.io/projected/42d2e8aa-740b-468b-a40b-9aa0fefc760c-kube-api-access-sxlnj\") on node \"crc\" DevicePath \"\"" Feb 27 19:56:04 crc kubenswrapper[4941]: E0227 19:56:04.470161 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:56:04 crc kubenswrapper[4941]: E0227 19:56:04.471987 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:56:04 crc kubenswrapper[4941]: 
I0227 19:56:04.903339 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" event={"ID":"42d2e8aa-740b-468b-a40b-9aa0fefc760c","Type":"ContainerDied","Data":"696f8cf9c0af23cf44521c236dada4fb44bd3fa6eb1df5861351f49f2c14fa3c"} Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.903375 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="696f8cf9c0af23cf44521c236dada4fb44bd3fa6eb1df5861351f49f2c14fa3c" Feb 27 19:56:04 crc kubenswrapper[4941]: I0227 19:56:04.903511 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537036-dpxbt" Feb 27 19:56:05 crc kubenswrapper[4941]: I0227 19:56:05.173282 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-bfg7w"] Feb 27 19:56:05 crc kubenswrapper[4941]: I0227 19:56:05.181729 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-bfg7w"] Feb 27 19:56:05 crc kubenswrapper[4941]: E0227 19:56:05.469851 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:56:06 crc kubenswrapper[4941]: I0227 19:56:06.476189 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a332721-1e6a-4f6b-a4ff-a0943263f545" path="/var/lib/kubelet/pods/4a332721-1e6a-4f6b-a4ff-a0943263f545/volumes" Feb 27 19:56:11 crc kubenswrapper[4941]: E0227 19:56:11.471147 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:56:15 crc kubenswrapper[4941]: E0227 19:56:15.469274 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71" Feb 27 19:56:16 crc kubenswrapper[4941]: E0227 19:56:16.469707 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:56:16 crc kubenswrapper[4941]: E0227 19:56:16.469855 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d" Feb 27 19:56:25 crc kubenswrapper[4941]: E0227 19:56:25.470278 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6" Feb 27 19:56:26 crc kubenswrapper[4941]: E0227 19:56:26.469490 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" 
podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:56:29 crc kubenswrapper[4941]: I0227 19:56:29.851982 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 19:56:29 crc kubenswrapper[4941]: I0227 19:56:29.852424 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 19:56:30 crc kubenswrapper[4941]: E0227 19:56:30.469503 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:56:31 crc kubenswrapper[4941]: E0227 19:56:31.468413 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:56:37 crc kubenswrapper[4941]: E0227 19:56:37.469069 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:56:37 crc kubenswrapper[4941]: E0227 19:56:37.469680 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:56:43 crc kubenswrapper[4941]: E0227 19:56:43.469565 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:56:46 crc kubenswrapper[4941]: E0227 19:56:46.241564 4941 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 19:56:46 crc kubenswrapper[4941]: E0227 19:56:46.241990 4941 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t46t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ttm4p_openshift-marketplace(8d15d663-67b1-48e2-8e06-c4c27858e991): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:56:46 crc kubenswrapper[4941]: E0227 19:56:46.243319 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:56:50 crc kubenswrapper[4941]: E0227 19:56:50.472447 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:56:51 crc kubenswrapper[4941]: E0227 19:56:51.468558 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkqp2" podUID="048b2614-045b-4bed-89ef-8554c574f3e6"
Feb 27 19:56:54 crc kubenswrapper[4941]: E0227 19:56:54.469872 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:56:57 crc kubenswrapper[4941]: E0227 19:56:57.469969 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:56:59 crc kubenswrapper[4941]: I0227 19:56:59.851720 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 19:56:59 crc kubenswrapper[4941]: I0227 19:56:59.852263 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 19:56:59 crc kubenswrapper[4941]: I0227 19:56:59.852354 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr"
Feb 27 19:56:59 crc kubenswrapper[4941]: I0227 19:56:59.853412 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 19:56:59 crc kubenswrapper[4941]: I0227 19:56:59.853577 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a" gracePeriod=600
Feb 27 19:57:00 crc kubenswrapper[4941]: I0227 19:57:00.273377 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a" exitCode=0
Feb 27 19:57:00 crc kubenswrapper[4941]: I0227 19:57:00.273543 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a"}
Feb 27 19:57:00 crc kubenswrapper[4941]: I0227 19:57:00.274001 4941 scope.go:117] "RemoveContainer" containerID="bebc3d1d72c88a0e2bf6e3cf3f2644b8b14415b97b25e73e4e4239960c6af6a7"
Feb 27 19:57:00 crc kubenswrapper[4941]: I0227 19:57:00.273708 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"9eec6cc7c7712faabdd73b1b58c5b2509bfe9aa7a15f854471946d93d729953a"}
Feb 27 19:57:03 crc kubenswrapper[4941]: E0227 19:57:03.469183 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zs7bf" podUID="1737ca02-aded-4254-b433-aac4a9ccad71"
Feb 27 19:57:05 crc kubenswrapper[4941]: I0227 19:57:05.310923 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkqp2" event={"ID":"048b2614-045b-4bed-89ef-8554c574f3e6","Type":"ContainerStarted","Data":"97d2f7002eb31bf3e0f92ce7d8dd1ef1ddcb3fe4e64f4f4beee42887190d1934"}
Feb 27 19:57:06 crc kubenswrapper[4941]: I0227 19:57:06.320941 4941 generic.go:334] "Generic (PLEG): container finished" podID="048b2614-045b-4bed-89ef-8554c574f3e6" containerID="97d2f7002eb31bf3e0f92ce7d8dd1ef1ddcb3fe4e64f4f4beee42887190d1934" exitCode=0
Feb 27 19:57:06 crc kubenswrapper[4941]: I0227 19:57:06.321058 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkqp2" event={"ID":"048b2614-045b-4bed-89ef-8554c574f3e6","Type":"ContainerDied","Data":"97d2f7002eb31bf3e0f92ce7d8dd1ef1ddcb3fe4e64f4f4beee42887190d1934"}
Feb 27 19:57:07 crc kubenswrapper[4941]: I0227 19:57:07.331446 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkqp2" event={"ID":"048b2614-045b-4bed-89ef-8554c574f3e6","Type":"ContainerStarted","Data":"3dffcb9d29a3d455c1b209e14fd722b7b88c04c37d890ed3efa1125ab1c05f22"}
Feb 27 19:57:07 crc kubenswrapper[4941]: I0227 19:57:07.354426 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkqp2" podStartSLOduration=2.772749129 podStartE2EDuration="16m3.354405816s" podCreationTimestamp="2026-02-27 19:41:04 +0000 UTC" firstStartedPulling="2026-02-27 19:41:06.22534589 +0000 UTC m=+384.486486320" lastFinishedPulling="2026-02-27 19:57:06.807002557 +0000 UTC m=+1345.068143007" observedRunningTime="2026-02-27 19:57:07.351506986 +0000 UTC m=+1345.612647436" watchObservedRunningTime="2026-02-27 19:57:07.354405816 +0000 UTC m=+1345.615546246"
Feb 27 19:57:08 crc kubenswrapper[4941]: E0227 19:57:08.470248 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:57:10 crc kubenswrapper[4941]: E0227 19:57:10.470605 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:57:15 crc kubenswrapper[4941]: I0227 19:57:15.004895 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:57:15 crc kubenswrapper[4941]: I0227 19:57:15.005245 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:57:15 crc kubenswrapper[4941]: I0227 19:57:15.053748 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:57:15 crc kubenswrapper[4941]: I0227 19:57:15.440953 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkqp2"
Feb 27 19:57:17 crc kubenswrapper[4941]: I0227 19:57:17.396354 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7bf" event={"ID":"1737ca02-aded-4254-b433-aac4a9ccad71","Type":"ContainerStarted","Data":"171eeac85e8fb6fff70d3c4886156c5aa0fbd995c3db43de330a133b0232ea87"}
Feb 27 19:57:18 crc kubenswrapper[4941]: I0227 19:57:18.409605 4941 generic.go:334] "Generic (PLEG): container finished" podID="1737ca02-aded-4254-b433-aac4a9ccad71" containerID="171eeac85e8fb6fff70d3c4886156c5aa0fbd995c3db43de330a133b0232ea87" exitCode=0
Feb 27 19:57:18 crc kubenswrapper[4941]: I0227 19:57:18.409659 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7bf" event={"ID":"1737ca02-aded-4254-b433-aac4a9ccad71","Type":"ContainerDied","Data":"171eeac85e8fb6fff70d3c4886156c5aa0fbd995c3db43de330a133b0232ea87"}
Feb 27 19:57:19 crc kubenswrapper[4941]: I0227 19:57:19.436416 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7bf" event={"ID":"1737ca02-aded-4254-b433-aac4a9ccad71","Type":"ContainerStarted","Data":"5899b5e1b73536a0913b53a598812e8368b436980c6e07835cd3ac073e2aeb63"}
Feb 27 19:57:19 crc kubenswrapper[4941]: I0227 19:57:19.469650 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zs7bf" podStartSLOduration=1.8699832760000001 podStartE2EDuration="16m11.469629416s" podCreationTimestamp="2026-02-27 19:41:08 +0000 UTC" firstStartedPulling="2026-02-27 19:41:09.248169854 +0000 UTC m=+387.509310294" lastFinishedPulling="2026-02-27 19:57:18.847815984 +0000 UTC m=+1357.108956434" observedRunningTime="2026-02-27 19:57:19.462071387 +0000 UTC m=+1357.723211847" watchObservedRunningTime="2026-02-27 19:57:19.469629416 +0000 UTC m=+1357.730769846"
Feb 27 19:57:19 crc kubenswrapper[4941]: E0227 19:57:19.470099 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:57:25 crc kubenswrapper[4941]: E0227 19:57:25.470170 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:57:28 crc kubenswrapper[4941]: I0227 19:57:28.398642 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zs7bf"
Feb 27 19:57:28 crc kubenswrapper[4941]: I0227 19:57:28.399039 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zs7bf"
Feb 27 19:57:28 crc kubenswrapper[4941]: I0227 19:57:28.464635 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zs7bf"
Feb 27 19:57:28 crc kubenswrapper[4941]: I0227 19:57:28.589381 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zs7bf"
Feb 27 19:57:32 crc kubenswrapper[4941]: E0227 19:57:32.473095 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" podUID="0a787d75-f193-4f58-833b-337e41627a9d"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.572262 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bxr82/must-gather-b5bbv"]
Feb 27 19:57:32 crc kubenswrapper[4941]: E0227 19:57:32.572496 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d2e8aa-740b-468b-a40b-9aa0fefc760c" containerName="oc"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.572526 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d2e8aa-740b-468b-a40b-9aa0fefc760c" containerName="oc"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.572649 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d2e8aa-740b-468b-a40b-9aa0fefc760c" containerName="oc"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.573303 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.575281 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bxr82"/"kube-root-ca.crt"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.575585 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bxr82"/"openshift-service-ca.crt"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.634657 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bxr82/must-gather-b5bbv"]
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.667835 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtf8z\" (UniqueName: \"kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.667907 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.769572 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtf8z\" (UniqueName: \"kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.769663 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.770198 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.793143 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtf8z\" (UniqueName: \"kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z\") pod \"must-gather-b5bbv\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:32 crc kubenswrapper[4941]: I0227 19:57:32.899519 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bxr82/must-gather-b5bbv"
Feb 27 19:57:33 crc kubenswrapper[4941]: I0227 19:57:33.149702 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bxr82/must-gather-b5bbv"]
Feb 27 19:57:33 crc kubenswrapper[4941]: W0227 19:57:33.152492 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb6568b_e1db_40af_9a57_f44aa47fd106.slice/crio-7bc726088b920bfb77bdc2a473cd5909a52ecad8083221119d64a9581c8fe210 WatchSource:0}: Error finding container 7bc726088b920bfb77bdc2a473cd5909a52ecad8083221119d64a9581c8fe210: Status 404 returned error can't find the container with id 7bc726088b920bfb77bdc2a473cd5909a52ecad8083221119d64a9581c8fe210
Feb 27 19:57:33 crc kubenswrapper[4941]: I0227 19:57:33.557537 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bxr82/must-gather-b5bbv" event={"ID":"0eb6568b-e1db-40af-9a57-f44aa47fd106","Type":"ContainerStarted","Data":"7bc726088b920bfb77bdc2a473cd5909a52ecad8083221119d64a9581c8fe210"}
Feb 27 19:57:38 crc kubenswrapper[4941]: I0227 19:57:38.584734 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bxr82/must-gather-b5bbv" event={"ID":"0eb6568b-e1db-40af-9a57-f44aa47fd106","Type":"ContainerStarted","Data":"e769250d42db8010dbf17c20d8b6bed1c62c29f7ac960bbf3269b0da52b57c39"}
Feb 27 19:57:38 crc kubenswrapper[4941]: I0227 19:57:38.585398 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bxr82/must-gather-b5bbv" event={"ID":"0eb6568b-e1db-40af-9a57-f44aa47fd106","Type":"ContainerStarted","Data":"f2e0222cc962271ad0394a45a493c0c175fdb64129f6f931e770b0adbb938303"}
Feb 27 19:57:38 crc kubenswrapper[4941]: I0227 19:57:38.604784 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bxr82/must-gather-b5bbv" podStartSLOduration=2.098624547 podStartE2EDuration="6.604762538s" podCreationTimestamp="2026-02-27 19:57:32 +0000 UTC" firstStartedPulling="2026-02-27 19:57:33.155307735 +0000 UTC m=+1371.416448155" lastFinishedPulling="2026-02-27 19:57:37.661445696 +0000 UTC m=+1375.922586146" observedRunningTime="2026-02-27 19:57:38.601146345 +0000 UTC m=+1376.862286805" watchObservedRunningTime="2026-02-27 19:57:38.604762538 +0000 UTC m=+1376.865902958"
Feb 27 19:57:39 crc kubenswrapper[4941]: E0227 19:57:39.467970 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:57:43 crc kubenswrapper[4941]: I0227 19:57:43.844609 4941 scope.go:117] "RemoveContainer" containerID="f97c11f0e18893095b700ef93ea7a61ad20da519c0b2998f3dee9e9522ee26d7"
Feb 27 19:57:47 crc kubenswrapper[4941]: I0227 19:57:47.652371 4941 generic.go:334] "Generic (PLEG): container finished" podID="0a787d75-f193-4f58-833b-337e41627a9d" containerID="42fb4635094d7f24b35c19e1514343f0f246a2778aa544e7a800e3f5766453cc" exitCode=0
Feb 27 19:57:47 crc kubenswrapper[4941]: I0227 19:57:47.652463 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" event={"ID":"0a787d75-f193-4f58-833b-337e41627a9d","Type":"ContainerDied","Data":"42fb4635094d7f24b35c19e1514343f0f246a2778aa544e7a800e3f5766453cc"}
Feb 27 19:57:48 crc kubenswrapper[4941]: I0227 19:57:48.895951 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537032-pcvsw"
Feb 27 19:57:48 crc kubenswrapper[4941]: I0227 19:57:48.973492 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vz4d\" (UniqueName: \"kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d\") pod \"0a787d75-f193-4f58-833b-337e41627a9d\" (UID: \"0a787d75-f193-4f58-833b-337e41627a9d\") "
Feb 27 19:57:48 crc kubenswrapper[4941]: I0227 19:57:48.982692 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d" (OuterVolumeSpecName: "kube-api-access-6vz4d") pod "0a787d75-f193-4f58-833b-337e41627a9d" (UID: "0a787d75-f193-4f58-833b-337e41627a9d"). InnerVolumeSpecName "kube-api-access-6vz4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.075743 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vz4d\" (UniqueName: \"kubernetes.io/projected/0a787d75-f193-4f58-833b-337e41627a9d-kube-api-access-6vz4d\") on node \"crc\" DevicePath \"\""
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.664131 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537032-pcvsw" event={"ID":"0a787d75-f193-4f58-833b-337e41627a9d","Type":"ContainerDied","Data":"b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a"}
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.664181 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537032-pcvsw"
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.664187 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9d3bd12c716cba932ddd0de0cc30954fb2a8d1576184377cf76d6ffdebff74a"
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.958506 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537030-89szz"]
Feb 27 19:57:49 crc kubenswrapper[4941]: I0227 19:57:49.961912 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537030-89szz"]
Feb 27 19:57:50 crc kubenswrapper[4941]: E0227 19:57:50.469691 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:57:50 crc kubenswrapper[4941]: I0227 19:57:50.474168 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984d38fd-74e0-4716-a788-dadecfa16dc5" path="/var/lib/kubelet/pods/984d38fd-74e0-4716-a788-dadecfa16dc5/volumes"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.145642 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537038-gc75q"]
Feb 27 19:58:00 crc kubenswrapper[4941]: E0227 19:58:00.146684 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a787d75-f193-4f58-833b-337e41627a9d" containerName="oc"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.146707 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a787d75-f193-4f58-833b-337e41627a9d" containerName="oc"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.146906 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a787d75-f193-4f58-833b-337e41627a9d" containerName="oc"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.147461 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.150365 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.155455 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.155835 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.158979 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537038-gc75q"]
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.237191 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97qt\" (UniqueName: \"kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt\") pod \"auto-csr-approver-29537038-gc75q\" (UID: \"1f719673-f9ad-4583-ad08-6aa8d5676545\") " pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.338257 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97qt\" (UniqueName: \"kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt\") pod \"auto-csr-approver-29537038-gc75q\" (UID: \"1f719673-f9ad-4583-ad08-6aa8d5676545\") " pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.363062 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97qt\" (UniqueName: \"kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt\") pod \"auto-csr-approver-29537038-gc75q\" (UID: \"1f719673-f9ad-4583-ad08-6aa8d5676545\") " pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.497264 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:00 crc kubenswrapper[4941]: I0227 19:58:00.742177 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537038-gc75q"]
Feb 27 19:58:01 crc kubenswrapper[4941]: I0227 19:58:01.742907 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537038-gc75q" event={"ID":"1f719673-f9ad-4583-ad08-6aa8d5676545","Type":"ContainerStarted","Data":"a9619a084b7f0c253e6dd2dcb9cac37da845515361a4fa1f676d37350e33ce63"}
Feb 27 19:58:02 crc kubenswrapper[4941]: E0227 19:58:02.471517 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:58:02 crc kubenswrapper[4941]: I0227 19:58:02.749906 4941 generic.go:334] "Generic (PLEG): container finished" podID="1f719673-f9ad-4583-ad08-6aa8d5676545" containerID="1a7d71b3d4ef8eb33133e7776c243e8e6bbd2d87bcc40e05ec2164ef9b637dbd" exitCode=0
Feb 27 19:58:02 crc kubenswrapper[4941]: I0227 19:58:02.749979 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537038-gc75q" event={"ID":"1f719673-f9ad-4583-ad08-6aa8d5676545","Type":"ContainerDied","Data":"1a7d71b3d4ef8eb33133e7776c243e8e6bbd2d87bcc40e05ec2164ef9b637dbd"}
Feb 27 19:58:03 crc kubenswrapper[4941]: I0227 19:58:03.984897 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.092354 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s97qt\" (UniqueName: \"kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt\") pod \"1f719673-f9ad-4583-ad08-6aa8d5676545\" (UID: \"1f719673-f9ad-4583-ad08-6aa8d5676545\") "
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.103029 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt" (OuterVolumeSpecName: "kube-api-access-s97qt") pod "1f719673-f9ad-4583-ad08-6aa8d5676545" (UID: "1f719673-f9ad-4583-ad08-6aa8d5676545"). InnerVolumeSpecName "kube-api-access-s97qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.194298 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s97qt\" (UniqueName: \"kubernetes.io/projected/1f719673-f9ad-4583-ad08-6aa8d5676545-kube-api-access-s97qt\") on node \"crc\" DevicePath \"\""
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.760943 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537038-gc75q" event={"ID":"1f719673-f9ad-4583-ad08-6aa8d5676545","Type":"ContainerDied","Data":"a9619a084b7f0c253e6dd2dcb9cac37da845515361a4fa1f676d37350e33ce63"}
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.760973 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537038-gc75q"
Feb 27 19:58:04 crc kubenswrapper[4941]: I0227 19:58:04.760981 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9619a084b7f0c253e6dd2dcb9cac37da845515361a4fa1f676d37350e33ce63"
Feb 27 19:58:05 crc kubenswrapper[4941]: I0227 19:58:05.044641 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537032-pcvsw"]
Feb 27 19:58:05 crc kubenswrapper[4941]: I0227 19:58:05.053834 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537032-pcvsw"]
Feb 27 19:58:06 crc kubenswrapper[4941]: I0227 19:58:06.474177 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a787d75-f193-4f58-833b-337e41627a9d" path="/var/lib/kubelet/pods/0a787d75-f193-4f58-833b-337e41627a9d/volumes"
Feb 27 19:58:15 crc kubenswrapper[4941]: I0227 19:58:15.450448 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-52wdw_b70f7996-0cfa-4eb2-896e-49fdaaf5c07a/control-plane-machine-set-operator/0.log"
Feb 27 19:58:15 crc kubenswrapper[4941]: I0227 19:58:15.580390 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9s2rr_3db07a98-f6f1-4aa8-9ca9-1989dfc61f04/kube-rbac-proxy/0.log"
Feb 27 19:58:15 crc kubenswrapper[4941]: I0227 19:58:15.608942 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9s2rr_3db07a98-f6f1-4aa8-9ca9-1989dfc61f04/machine-api-operator/0.log"
Feb 27 19:58:16 crc kubenswrapper[4941]: E0227 19:58:16.469680 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:58:27 crc kubenswrapper[4941]: E0227 19:58:27.470279 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:58:40 crc kubenswrapper[4941]: E0227 19:58:40.469619 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 19:58:43 crc kubenswrapper[4941]: I0227 19:58:43.897891 4941 scope.go:117] "RemoveContainer" containerID="fd9666a9ea5fc0498e6b42a5197fa32e6c76af3eadd62ec2a08c685aee9041b2"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.604615 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-utilities/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.705191 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-content/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.720736 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-utilities/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.725091 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-content/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.853649 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-utilities/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.859049 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/extract-content/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.918577 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs7bf_1737ca02-aded-4254-b433-aac4a9ccad71/registry-server/0.log"
Feb 27 19:58:53 crc kubenswrapper[4941]: I0227 19:58:53.982652 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-utilities/0.log"
Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.123009 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-utilities/0.log"
Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.145617 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-content/0.log"
Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.159374 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-content/0.log"
Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.299317 4941 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-utilities/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.341444 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/extract-content/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: E0227 19:58:54.468433 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.500628 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttm4p_8d15d663-67b1-48e2-8e06-c4c27858e991/extract-utilities/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.608245 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f87fb_f053baa0-dc63-462c-921e-385f02bda750/registry-server/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.661401 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttm4p_8d15d663-67b1-48e2-8e06-c4c27858e991/extract-utilities/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.851996 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttm4p_8d15d663-67b1-48e2-8e06-c4c27858e991/extract-utilities/0.log" Feb 27 19:58:54 crc kubenswrapper[4941]: I0227 19:58:54.984945 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sl67g_4cae2ecf-4f79-4699-8d3e-e10e965eaa7b/marketplace-operator/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: 
I0227 19:58:55.039886 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.179836 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.194895 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.228930 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.349043 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.371197 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.413277 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vkqp2_048b2614-045b-4bed-89ef-8554c574f3e6/registry-server/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.548713 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.682245 4941 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.684195 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.731224 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.865881 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-content/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.890637 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/extract-utilities/0.log" Feb 27 19:58:55 crc kubenswrapper[4941]: I0227 19:58:55.981346 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b25c9_5b6198dd-a465-4ed8-b4d1-b31c1cf9a266/registry-server/0.log" Feb 27 19:59:06 crc kubenswrapper[4941]: E0227 19:59:06.473766 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:59:19 crc kubenswrapper[4941]: E0227 19:59:19.472657 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:59:29 crc kubenswrapper[4941]: I0227 19:59:29.851431 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:59:29 crc kubenswrapper[4941]: I0227 19:59:29.852282 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:59:33 crc kubenswrapper[4941]: E0227 19:59:33.470011 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:59:39 crc kubenswrapper[4941]: I0227 19:59:39.288626 4941 generic.go:334] "Generic (PLEG): container finished" podID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerID="f2e0222cc962271ad0394a45a493c0c175fdb64129f6f931e770b0adbb938303" exitCode=0 Feb 27 19:59:39 crc kubenswrapper[4941]: I0227 19:59:39.288693 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bxr82/must-gather-b5bbv" event={"ID":"0eb6568b-e1db-40af-9a57-f44aa47fd106","Type":"ContainerDied","Data":"f2e0222cc962271ad0394a45a493c0c175fdb64129f6f931e770b0adbb938303"} Feb 27 19:59:39 crc kubenswrapper[4941]: I0227 19:59:39.289484 4941 scope.go:117] "RemoveContainer" containerID="f2e0222cc962271ad0394a45a493c0c175fdb64129f6f931e770b0adbb938303" Feb 
27 19:59:40 crc kubenswrapper[4941]: I0227 19:59:40.237219 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bxr82_must-gather-b5bbv_0eb6568b-e1db-40af-9a57-f44aa47fd106/gather/0.log" Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.531623 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bxr82/must-gather-b5bbv"] Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.532330 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bxr82/must-gather-b5bbv" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="copy" containerID="cri-o://e769250d42db8010dbf17c20d8b6bed1c62c29f7ac960bbf3269b0da52b57c39" gracePeriod=2 Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.537813 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bxr82/must-gather-b5bbv"] Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.843320 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bxr82_must-gather-b5bbv_0eb6568b-e1db-40af-9a57-f44aa47fd106/copy/0.log" Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.844978 4941 generic.go:334] "Generic (PLEG): container finished" podID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerID="e769250d42db8010dbf17c20d8b6bed1c62c29f7ac960bbf3269b0da52b57c39" exitCode=143 Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.845050 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc726088b920bfb77bdc2a473cd5909a52ecad8083221119d64a9581c8fe210" Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.868321 4941 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bxr82_must-gather-b5bbv_0eb6568b-e1db-40af-9a57-f44aa47fd106/copy/0.log" Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.868725 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bxr82/must-gather-b5bbv" Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.968684 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtf8z\" (UniqueName: \"kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z\") pod \"0eb6568b-e1db-40af-9a57-f44aa47fd106\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.969101 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output\") pod \"0eb6568b-e1db-40af-9a57-f44aa47fd106\" (UID: \"0eb6568b-e1db-40af-9a57-f44aa47fd106\") " Feb 27 19:59:46 crc kubenswrapper[4941]: I0227 19:59:46.973746 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z" (OuterVolumeSpecName: "kube-api-access-wtf8z") pod "0eb6568b-e1db-40af-9a57-f44aa47fd106" (UID: "0eb6568b-e1db-40af-9a57-f44aa47fd106"). InnerVolumeSpecName "kube-api-access-wtf8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:59:47 crc kubenswrapper[4941]: I0227 19:59:47.000113 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0eb6568b-e1db-40af-9a57-f44aa47fd106" (UID: "0eb6568b-e1db-40af-9a57-f44aa47fd106"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:59:47 crc kubenswrapper[4941]: I0227 19:59:47.070237 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtf8z\" (UniqueName: \"kubernetes.io/projected/0eb6568b-e1db-40af-9a57-f44aa47fd106-kube-api-access-wtf8z\") on node \"crc\" DevicePath \"\"" Feb 27 19:59:47 crc kubenswrapper[4941]: I0227 19:59:47.070276 4941 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0eb6568b-e1db-40af-9a57-f44aa47fd106-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 19:59:47 crc kubenswrapper[4941]: I0227 19:59:47.852070 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bxr82/must-gather-b5bbv" Feb 27 19:59:48 crc kubenswrapper[4941]: E0227 19:59:48.470870 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 19:59:48 crc kubenswrapper[4941]: I0227 19:59:48.474735 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" path="/var/lib/kubelet/pods/0eb6568b-e1db-40af-9a57-f44aa47fd106/volumes" Feb 27 19:59:59 crc kubenswrapper[4941]: I0227 19:59:59.851434 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:59:59 crc kubenswrapper[4941]: I0227 19:59:59.852160 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" 
podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149249 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537040-46hqw"] Feb 27 20:00:00 crc kubenswrapper[4941]: E0227 20:00:00.149609 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="gather" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149631 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="gather" Feb 27 20:00:00 crc kubenswrapper[4941]: E0227 20:00:00.149654 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f719673-f9ad-4583-ad08-6aa8d5676545" containerName="oc" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149664 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f719673-f9ad-4583-ad08-6aa8d5676545" containerName="oc" Feb 27 20:00:00 crc kubenswrapper[4941]: E0227 20:00:00.149679 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="copy" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149691 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="copy" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149830 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="gather" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149850 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb6568b-e1db-40af-9a57-f44aa47fd106" containerName="copy" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.149868 4941 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1f719673-f9ad-4583-ad08-6aa8d5676545" containerName="oc" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.150413 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537040-46hqw" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.152316 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.154545 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx"] Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.155507 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.156544 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.157249 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.158237 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.159258 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.167998 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537040-46hqw"] Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.174621 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx"] Feb 27 20:00:00 crc 
kubenswrapper[4941]: I0227 20:00:00.247734 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhssp\" (UniqueName: \"kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.247790 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.247821 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.248088 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vxs\" (UniqueName: \"kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs\") pod \"auto-csr-approver-29537040-46hqw\" (UID: \"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc\") " pod="openshift-infra/auto-csr-approver-29537040-46hqw" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.349340 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhssp\" (UniqueName: 
\"kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.349436 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.349514 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.349623 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vxs\" (UniqueName: \"kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs\") pod \"auto-csr-approver-29537040-46hqw\" (UID: \"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc\") " pod="openshift-infra/auto-csr-approver-29537040-46hqw" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.351583 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.364215 4941 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.366570 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vxs\" (UniqueName: \"kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs\") pod \"auto-csr-approver-29537040-46hqw\" (UID: \"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc\") " pod="openshift-infra/auto-csr-approver-29537040-46hqw" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.371569 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhssp\" (UniqueName: \"kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp\") pod \"collect-profiles-29537040-4v9mx\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.467238 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537040-46hqw" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.475788 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.912174 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537040-46hqw"] Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.922588 4941 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.928958 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537040-46hqw" event={"ID":"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc","Type":"ContainerStarted","Data":"cdd4b1d8ecbd1e02744f46fb5d9d3879aa5e885b2d87e798b4f1561d06819044"} Feb 27 20:00:00 crc kubenswrapper[4941]: I0227 20:00:00.960000 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx"] Feb 27 20:00:01 crc kubenswrapper[4941]: I0227 20:00:01.938349 4941 generic.go:334] "Generic (PLEG): container finished" podID="672f1185-0346-4dd9-8945-9f18e10740bd" containerID="c658e8ec961876088c1faf1b87434a843c25b6c13ca80af50b51756afa4a5c8c" exitCode=0 Feb 27 20:00:01 crc kubenswrapper[4941]: I0227 20:00:01.938439 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" event={"ID":"672f1185-0346-4dd9-8945-9f18e10740bd","Type":"ContainerDied","Data":"c658e8ec961876088c1faf1b87434a843c25b6c13ca80af50b51756afa4a5c8c"} Feb 27 20:00:01 crc kubenswrapper[4941]: I0227 20:00:01.938771 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" event={"ID":"672f1185-0346-4dd9-8945-9f18e10740bd","Type":"ContainerStarted","Data":"f1ad9ae91aadab1eff4dbc25fbc483c4fffdc21d6b4f7fb5d323d587e36edb0e"} Feb 27 20:00:02 crc kubenswrapper[4941]: E0227 20:00:02.473665 4941 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.201708 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx"
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.288055 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhssp\" (UniqueName: \"kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp\") pod \"672f1185-0346-4dd9-8945-9f18e10740bd\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") "
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.288118 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume\") pod \"672f1185-0346-4dd9-8945-9f18e10740bd\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") "
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.288169 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume\") pod \"672f1185-0346-4dd9-8945-9f18e10740bd\" (UID: \"672f1185-0346-4dd9-8945-9f18e10740bd\") "
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.289615 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "672f1185-0346-4dd9-8945-9f18e10740bd" (UID: "672f1185-0346-4dd9-8945-9f18e10740bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.296466 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "672f1185-0346-4dd9-8945-9f18e10740bd" (UID: "672f1185-0346-4dd9-8945-9f18e10740bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.296863 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp" (OuterVolumeSpecName: "kube-api-access-lhssp") pod "672f1185-0346-4dd9-8945-9f18e10740bd" (UID: "672f1185-0346-4dd9-8945-9f18e10740bd"). InnerVolumeSpecName "kube-api-access-lhssp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.389963 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhssp\" (UniqueName: \"kubernetes.io/projected/672f1185-0346-4dd9-8945-9f18e10740bd-kube-api-access-lhssp\") on node \"crc\" DevicePath \"\""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.390016 4941 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672f1185-0346-4dd9-8945-9f18e10740bd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.390035 4941 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672f1185-0346-4dd9-8945-9f18e10740bd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.951273 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx" event={"ID":"672f1185-0346-4dd9-8945-9f18e10740bd","Type":"ContainerDied","Data":"f1ad9ae91aadab1eff4dbc25fbc483c4fffdc21d6b4f7fb5d323d587e36edb0e"}
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.951307 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ad9ae91aadab1eff4dbc25fbc483c4fffdc21d6b4f7fb5d323d587e36edb0e"
Feb 27 20:00:03 crc kubenswrapper[4941]: I0227 20:00:03.951353 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537040-4v9mx"
Feb 27 20:00:04 crc kubenswrapper[4941]: I0227 20:00:04.960390 4941 generic.go:334] "Generic (PLEG): container finished" podID="1caa4c4f-a95b-4b70-9b6d-7d11b34357cc" containerID="d1706516e95456840c032b279cc4b6e4f864eaea8c96376181f240726932c531" exitCode=0
Feb 27 20:00:04 crc kubenswrapper[4941]: I0227 20:00:04.960500 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537040-46hqw" event={"ID":"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc","Type":"ContainerDied","Data":"d1706516e95456840c032b279cc4b6e4f864eaea8c96376181f240726932c531"}
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.210065 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537040-46hqw"
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.334935 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vxs\" (UniqueName: \"kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs\") pod \"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc\" (UID: \"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc\") "
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.340633 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs" (OuterVolumeSpecName: "kube-api-access-l8vxs") pod "1caa4c4f-a95b-4b70-9b6d-7d11b34357cc" (UID: "1caa4c4f-a95b-4b70-9b6d-7d11b34357cc"). InnerVolumeSpecName "kube-api-access-l8vxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.436491 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vxs\" (UniqueName: \"kubernetes.io/projected/1caa4c4f-a95b-4b70-9b6d-7d11b34357cc-kube-api-access-l8vxs\") on node \"crc\" DevicePath \"\""
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.975105 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537040-46hqw" event={"ID":"1caa4c4f-a95b-4b70-9b6d-7d11b34357cc","Type":"ContainerDied","Data":"cdd4b1d8ecbd1e02744f46fb5d9d3879aa5e885b2d87e798b4f1561d06819044"}
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.975595 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd4b1d8ecbd1e02744f46fb5d9d3879aa5e885b2d87e798b4f1561d06819044"
Feb 27 20:00:06 crc kubenswrapper[4941]: I0227 20:00:06.975190 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537040-46hqw"
Feb 27 20:00:07 crc kubenswrapper[4941]: I0227 20:00:07.259191 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537034-6hrpc"]
Feb 27 20:00:07 crc kubenswrapper[4941]: I0227 20:00:07.263057 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537034-6hrpc"]
Feb 27 20:00:08 crc kubenswrapper[4941]: I0227 20:00:08.477529 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91543e1-1473-4e7b-9b73-cac7cc6c36c1" path="/var/lib/kubelet/pods/a91543e1-1473-4e7b-9b73-cac7cc6c36c1/volumes"
Feb 27 20:00:16 crc kubenswrapper[4941]: E0227 20:00:16.469677 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:00:28 crc kubenswrapper[4941]: E0227 20:00:28.469681 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:00:29 crc kubenswrapper[4941]: I0227 20:00:29.850639 4941 patch_prober.go:28] interesting pod/machine-config-daemon-hj7qr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 20:00:29 crc kubenswrapper[4941]: I0227 20:00:29.850711 4941 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 20:00:29 crc kubenswrapper[4941]: I0227 20:00:29.850766 4941 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr"
Feb 27 20:00:29 crc kubenswrapper[4941]: I0227 20:00:29.851383 4941 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9eec6cc7c7712faabdd73b1b58c5b2509bfe9aa7a15f854471946d93d729953a"} pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 20:00:29 crc kubenswrapper[4941]: I0227 20:00:29.851448 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" podUID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerName="machine-config-daemon" containerID="cri-o://9eec6cc7c7712faabdd73b1b58c5b2509bfe9aa7a15f854471946d93d729953a" gracePeriod=600
Feb 27 20:00:30 crc kubenswrapper[4941]: I0227 20:00:30.137548 4941 generic.go:334] "Generic (PLEG): container finished" podID="1c0b99f5-8424-4e74-a332-f6dff828c48a" containerID="9eec6cc7c7712faabdd73b1b58c5b2509bfe9aa7a15f854471946d93d729953a" exitCode=0
Feb 27 20:00:30 crc kubenswrapper[4941]: I0227 20:00:30.137641 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerDied","Data":"9eec6cc7c7712faabdd73b1b58c5b2509bfe9aa7a15f854471946d93d729953a"}
Feb 27 20:00:30 crc kubenswrapper[4941]: I0227 20:00:30.138000 4941 scope.go:117] "RemoveContainer" containerID="9506b549ef944c19a5636653306998dd2e6029783108d9d1a06f5aa2e159a13a"
Feb 27 20:00:31 crc kubenswrapper[4941]: I0227 20:00:31.144287 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hj7qr" event={"ID":"1c0b99f5-8424-4e74-a332-f6dff828c48a","Type":"ContainerStarted","Data":"17977b49895d75aa2206e533f6ed0e7892cc1b248ba3da047848b316942b08d5"}
Feb 27 20:00:40 crc kubenswrapper[4941]: E0227 20:00:40.470925 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.631708 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:00:50 crc kubenswrapper[4941]: E0227 20:00:50.632457 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="672f1185-0346-4dd9-8945-9f18e10740bd" containerName="collect-profiles"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.632490 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="672f1185-0346-4dd9-8945-9f18e10740bd" containerName="collect-profiles"
Feb 27 20:00:50 crc kubenswrapper[4941]: E0227 20:00:50.632513 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caa4c4f-a95b-4b70-9b6d-7d11b34357cc" containerName="oc"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.632520 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caa4c4f-a95b-4b70-9b6d-7d11b34357cc" containerName="oc"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.632646 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="672f1185-0346-4dd9-8945-9f18e10740bd" containerName="collect-profiles"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.632669 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caa4c4f-a95b-4b70-9b6d-7d11b34357cc" containerName="oc"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.633506 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.649584 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.745692 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.745764 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.745796 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfz6m\" (UniqueName: \"kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.847498 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.848007 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.848181 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfz6m\" (UniqueName: \"kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.848269 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.848784 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.870897 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfz6m\" (UniqueName: \"kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m\") pod \"redhat-marketplace-lrjkp\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") " pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:50 crc kubenswrapper[4941]: I0227 20:00:50.964096 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:00:51 crc kubenswrapper[4941]: I0227 20:00:51.213321 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:00:51 crc kubenswrapper[4941]: I0227 20:00:51.256839 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerStarted","Data":"c07f8ca1cda67fc30892e200f710d2e3089d3f900753dd7e2d85022e60d912de"}
Feb 27 20:00:52 crc kubenswrapper[4941]: I0227 20:00:52.267950 4941 generic.go:334] "Generic (PLEG): container finished" podID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerID="475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7" exitCode=0
Feb 27 20:00:52 crc kubenswrapper[4941]: I0227 20:00:52.268099 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerDied","Data":"475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7"}
Feb 27 20:00:53 crc kubenswrapper[4941]: I0227 20:00:53.279903 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerStarted","Data":"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"}
Feb 27 20:00:54 crc kubenswrapper[4941]: I0227 20:00:54.288607 4941 generic.go:334] "Generic (PLEG): container finished" podID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerID="5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a" exitCode=0
Feb 27 20:00:54 crc kubenswrapper[4941]: I0227 20:00:54.288699 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerDied","Data":"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"}
Feb 27 20:00:55 crc kubenswrapper[4941]: I0227 20:00:55.294878 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerStarted","Data":"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"}
Feb 27 20:00:55 crc kubenswrapper[4941]: I0227 20:00:55.322249 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrjkp" podStartSLOduration=2.876648107 podStartE2EDuration="5.322217832s" podCreationTimestamp="2026-02-27 20:00:50 +0000 UTC" firstStartedPulling="2026-02-27 20:00:52.270379342 +0000 UTC m=+1570.531519792" lastFinishedPulling="2026-02-27 20:00:54.715949107 +0000 UTC m=+1572.977089517" observedRunningTime="2026-02-27 20:00:55.317937581 +0000 UTC m=+1573.579078051" watchObservedRunningTime="2026-02-27 20:00:55.322217832 +0000 UTC m=+1573.583358332"
Feb 27 20:00:55 crc kubenswrapper[4941]: E0227 20:00:55.468238 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:01:00 crc kubenswrapper[4941]: I0227 20:01:00.964830 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:00 crc kubenswrapper[4941]: I0227 20:01:00.965460 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:01 crc kubenswrapper[4941]: I0227 20:01:01.019677 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:01 crc kubenswrapper[4941]: I0227 20:01:01.414667 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:01 crc kubenswrapper[4941]: I0227 20:01:01.467348 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:01:03 crc kubenswrapper[4941]: I0227 20:01:03.361902 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lrjkp" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="registry-server" containerID="cri-o://0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572" gracePeriod=2
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.225252 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.333456 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities\") pod \"641cc78f-d19b-416c-bc60-a02fe4660c26\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") "
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.333601 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfz6m\" (UniqueName: \"kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m\") pod \"641cc78f-d19b-416c-bc60-a02fe4660c26\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") "
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.333638 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content\") pod \"641cc78f-d19b-416c-bc60-a02fe4660c26\" (UID: \"641cc78f-d19b-416c-bc60-a02fe4660c26\") "
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.337166 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities" (OuterVolumeSpecName: "utilities") pod "641cc78f-d19b-416c-bc60-a02fe4660c26" (UID: "641cc78f-d19b-416c-bc60-a02fe4660c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.349737 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m" (OuterVolumeSpecName: "kube-api-access-mfz6m") pod "641cc78f-d19b-416c-bc60-a02fe4660c26" (UID: "641cc78f-d19b-416c-bc60-a02fe4660c26"). InnerVolumeSpecName "kube-api-access-mfz6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.359216 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "641cc78f-d19b-416c-bc60-a02fe4660c26" (UID: "641cc78f-d19b-416c-bc60-a02fe4660c26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.369463 4941 generic.go:334] "Generic (PLEG): container finished" podID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerID="0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572" exitCode=0
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.369539 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrjkp"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.369531 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerDied","Data":"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"}
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.369657 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrjkp" event={"ID":"641cc78f-d19b-416c-bc60-a02fe4660c26","Type":"ContainerDied","Data":"c07f8ca1cda67fc30892e200f710d2e3089d3f900753dd7e2d85022e60d912de"}
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.369684 4941 scope.go:117] "RemoveContainer" containerID="0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.400303 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.401713 4941 scope.go:117] "RemoveContainer" containerID="5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.403902 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrjkp"]
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.431999 4941 scope.go:117] "RemoveContainer" containerID="475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.435500 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.435547 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfz6m\" (UniqueName: \"kubernetes.io/projected/641cc78f-d19b-416c-bc60-a02fe4660c26-kube-api-access-mfz6m\") on node \"crc\" DevicePath \"\""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.435581 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641cc78f-d19b-416c-bc60-a02fe4660c26-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.450566 4941 scope.go:117] "RemoveContainer" containerID="0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"
Feb 27 20:01:04 crc kubenswrapper[4941]: E0227 20:01:04.451070 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572\": container with ID starting with 0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572 not found: ID does not exist" containerID="0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.451163 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572"} err="failed to get container status \"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572\": rpc error: code = NotFound desc = could not find container \"0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572\": container with ID starting with 0c948f9c03e766628292a6e4440d8e37439df17dfaf6fbd55e68bfbd968ff572 not found: ID does not exist"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.451219 4941 scope.go:117] "RemoveContainer" containerID="5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"
Feb 27 20:01:04 crc kubenswrapper[4941]: E0227 20:01:04.451758 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a\": container with ID starting with 5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a not found: ID does not exist" containerID="5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.451816 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a"} err="failed to get container status \"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a\": rpc error: code = NotFound desc = could not find container \"5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a\": container with ID starting with 5c6becd1f45519127483fdf30a39cab0a77bf0fc25803672945875599e05ff6a not found: ID does not exist"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.451853 4941 scope.go:117] "RemoveContainer" containerID="475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7"
Feb 27 20:01:04 crc kubenswrapper[4941]: E0227 20:01:04.452165 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7\": container with ID starting with 475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7 not found: ID does not exist" containerID="475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.452220 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7"} err="failed to get container status \"475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7\": rpc error: code = NotFound desc = could not find container \"475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7\": container with ID starting with 475438a161bfca2eb9e549192a56466dae02c158cd495b397c54263ea81a77c7 not found: ID does not exist"
Feb 27 20:01:04 crc kubenswrapper[4941]: I0227 20:01:04.474831 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" path="/var/lib/kubelet/pods/641cc78f-d19b-416c-bc60-a02fe4660c26/volumes"
Feb 27 20:01:07 crc kubenswrapper[4941]: E0227 20:01:07.470214 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.439791 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"]
Feb 27 20:01:10 crc kubenswrapper[4941]: E0227 20:01:10.440436 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="extract-content"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.440456 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="extract-content"
Feb 27 20:01:10 crc kubenswrapper[4941]: E0227 20:01:10.440553 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="registry-server"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.440569 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="registry-server"
Feb 27 20:01:10 crc kubenswrapper[4941]: E0227 20:01:10.440602 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="extract-utilities"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.440614 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="extract-utilities"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.440777 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="641cc78f-d19b-416c-bc60-a02fe4660c26" containerName="registry-server"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.445672 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.455411 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"]
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.521720 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.521897 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rmh\" (UniqueName: \"kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.521949 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.623380 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rmh\" (UniqueName: \"kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.623552 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.624120 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.624261 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.624620 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.646072 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rmh\" (UniqueName: \"kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh\") pod \"certified-operators-8qlqw\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:10 crc kubenswrapper[4941]: I0227 20:01:10.779409 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:11 crc kubenswrapper[4941]: I0227 20:01:11.055142 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"]
Feb 27 20:01:11 crc kubenswrapper[4941]: W0227 20:01:11.063585 4941 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2b7a8e4_6445_445a_9a1c_8f50fdb8fff5.slice/crio-fdc1e19404d9cca2a99caa912175e5ac1c93b14dcc674848db85cdace95b3b0d WatchSource:0}: Error finding container fdc1e19404d9cca2a99caa912175e5ac1c93b14dcc674848db85cdace95b3b0d: Status 404 returned error can't find the container with id fdc1e19404d9cca2a99caa912175e5ac1c93b14dcc674848db85cdace95b3b0d
Feb 27 20:01:11 crc kubenswrapper[4941]: I0227 20:01:11.440384 4941 generic.go:334] "Generic (PLEG): container finished" podID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerID="abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588" exitCode=0
Feb 27 20:01:11 crc kubenswrapper[4941]: I0227 20:01:11.440442 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerDied","Data":"abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588"}
Feb 27 20:01:11 crc kubenswrapper[4941]: I0227 20:01:11.440512 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerStarted","Data":"fdc1e19404d9cca2a99caa912175e5ac1c93b14dcc674848db85cdace95b3b0d"}
Feb 27 20:01:12 crc kubenswrapper[4941]: I0227 20:01:12.451101 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerStarted","Data":"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd"}
Feb 27 20:01:13 crc kubenswrapper[4941]: I0227 20:01:13.461172 4941 generic.go:334] "Generic (PLEG): container finished" podID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerID="65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd" exitCode=0
Feb 27 20:01:13 crc kubenswrapper[4941]: I0227 20:01:13.461235 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerDied","Data":"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd"}
Feb 27 20:01:14 crc kubenswrapper[4941]: I0227 20:01:14.483168 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerStarted","Data":"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0"}
Feb 27 20:01:14 crc kubenswrapper[4941]: I0227 20:01:14.512962 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8qlqw" podStartSLOduration=2.096936382 podStartE2EDuration="4.512934704s" podCreationTimestamp="2026-02-27 20:01:10 +0000 UTC" firstStartedPulling="2026-02-27 20:01:11.442425796 +0000 UTC m=+1589.703566256" lastFinishedPulling="2026-02-27 20:01:13.858424118 +0000 UTC m=+1592.119564578" observedRunningTime="2026-02-27 20:01:14.506617305 +0000 UTC m=+1592.767757805" watchObservedRunningTime="2026-02-27 20:01:14.512934704 +0000 UTC m=+1592.774075144"
Feb 27 20:01:20 crc kubenswrapper[4941]: I0227 20:01:20.780242 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qlqw"
Feb 27 20:01:20 crc kubenswrapper[4941]: I0227 20:01:20.780665 4941 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="" pod="openshift-marketplace/certified-operators-8qlqw" Feb 27 20:01:20 crc kubenswrapper[4941]: I0227 20:01:20.829850 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qlqw" Feb 27 20:01:21 crc kubenswrapper[4941]: I0227 20:01:21.589415 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qlqw" Feb 27 20:01:21 crc kubenswrapper[4941]: I0227 20:01:21.836370 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"] Feb 27 20:01:22 crc kubenswrapper[4941]: E0227 20:01:22.472684 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 20:01:23 crc kubenswrapper[4941]: I0227 20:01:23.539270 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qlqw" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="registry-server" containerID="cri-o://377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0" gracePeriod=2 Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.432794 4941 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qlqw" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.548098 4941 generic.go:334] "Generic (PLEG): container finished" podID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerID="377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0" exitCode=0 Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.548147 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerDied","Data":"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0"} Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.548196 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlqw" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.548214 4941 scope.go:117] "RemoveContainer" containerID="377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.548201 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlqw" event={"ID":"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5","Type":"ContainerDied","Data":"fdc1e19404d9cca2a99caa912175e5ac1c93b14dcc674848db85cdace95b3b0d"} Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.570849 4941 scope.go:117] "RemoveContainer" containerID="65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.594197 4941 scope.go:117] "RemoveContainer" containerID="abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.611151 4941 scope.go:117] "RemoveContainer" containerID="377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.611736 4941 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities\") pod \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.611841 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content\") pod \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.611942 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57rmh\" (UniqueName: \"kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh\") pod \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\" (UID: \"c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5\") " Feb 27 20:01:24 crc kubenswrapper[4941]: E0227 20:01:24.612707 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0\": container with ID starting with 377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0 not found: ID does not exist" containerID="377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.612734 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities" (OuterVolumeSpecName: "utilities") pod "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" (UID: "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.612757 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0"} err="failed to get container status \"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0\": rpc error: code = NotFound desc = could not find container \"377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0\": container with ID starting with 377426da18f0e992fd675c9792039084c05420422d844f8cfb73327a4af7ffd0 not found: ID does not exist" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.612788 4941 scope.go:117] "RemoveContainer" containerID="65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd" Feb 27 20:01:24 crc kubenswrapper[4941]: E0227 20:01:24.613277 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd\": container with ID starting with 65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd not found: ID does not exist" containerID="65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.613346 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd"} err="failed to get container status \"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd\": rpc error: code = NotFound desc = could not find container \"65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd\": container with ID starting with 65bc9569151f4088ae9752305c6d4fe22b3c668ec77d55ffb7aefd2287d7c3bd not found: ID does not exist" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.613407 4941 scope.go:117] "RemoveContainer" 
containerID="abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588" Feb 27 20:01:24 crc kubenswrapper[4941]: E0227 20:01:24.613857 4941 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588\": container with ID starting with abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588 not found: ID does not exist" containerID="abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.613894 4941 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588"} err="failed to get container status \"abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588\": rpc error: code = NotFound desc = could not find container \"abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588\": container with ID starting with abe66ac40a1f371a1cab5ca37e83324ac7e9bc4e10168446acd94487e0a89588 not found: ID does not exist" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.620003 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh" (OuterVolumeSpecName: "kube-api-access-57rmh") pod "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" (UID: "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5"). InnerVolumeSpecName "kube-api-access-57rmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.697596 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" (UID: "c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.713574 4941 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.713602 4941 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.713615 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57rmh\" (UniqueName: \"kubernetes.io/projected/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5-kube-api-access-57rmh\") on node \"crc\" DevicePath \"\"" Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.900117 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"] Feb 27 20:01:24 crc kubenswrapper[4941]: I0227 20:01:24.903746 4941 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qlqw"] Feb 27 20:01:26 crc kubenswrapper[4941]: I0227 20:01:26.479093 4941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" path="/var/lib/kubelet/pods/c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5/volumes" Feb 27 20:01:37 crc kubenswrapper[4941]: E0227 20:01:37.474580 4941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" Feb 27 20:01:43 crc kubenswrapper[4941]: I0227 20:01:43.990835 4941 scope.go:117] "RemoveContainer" 
containerID="01cc9e5710c2f5e249963a58017145cf7d458e2f591c77c9e61508dd5ebccf90" Feb 27 20:01:50 crc kubenswrapper[4941]: I0227 20:01:50.730594 4941 generic.go:334] "Generic (PLEG): container finished" podID="8d15d663-67b1-48e2-8e06-c4c27858e991" containerID="9d852108dab5d7ae65846a998edd860fcdc8c9535b0a353971bb41089b7f2191" exitCode=0 Feb 27 20:01:50 crc kubenswrapper[4941]: I0227 20:01:50.730684 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttm4p" event={"ID":"8d15d663-67b1-48e2-8e06-c4c27858e991","Type":"ContainerDied","Data":"9d852108dab5d7ae65846a998edd860fcdc8c9535b0a353971bb41089b7f2191"} Feb 27 20:01:51 crc kubenswrapper[4941]: I0227 20:01:51.740397 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttm4p" event={"ID":"8d15d663-67b1-48e2-8e06-c4c27858e991","Type":"ContainerStarted","Data":"cb5a32703dfdd0a51410c2612b486c2ba9f1fb06db65c84ba4d0c098fde74610"} Feb 27 20:01:51 crc kubenswrapper[4941]: I0227 20:01:51.771758 4941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttm4p" podStartSLOduration=2.457991973 podStartE2EDuration="11m7.771738139s" podCreationTimestamp="2026-02-27 19:50:44 +0000 UTC" firstStartedPulling="2026-02-27 19:50:45.799111824 +0000 UTC m=+964.060252284" lastFinishedPulling="2026-02-27 20:01:51.11285803 +0000 UTC m=+1629.373998450" observedRunningTime="2026-02-27 20:01:51.767338354 +0000 UTC m=+1630.028478834" watchObservedRunningTime="2026-02-27 20:01:51.771738139 +0000 UTC m=+1630.032878579" Feb 27 20:01:54 crc kubenswrapper[4941]: I0227 20:01:54.674593 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ttm4p" Feb 27 20:01:54 crc kubenswrapper[4941]: I0227 20:01:54.675034 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttm4p" Feb 
27 20:01:54 crc kubenswrapper[4941]: I0227 20:01:54.724213 4941 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttm4p" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.147293 4941 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537042-tmz8h"] Feb 27 20:02:00 crc kubenswrapper[4941]: E0227 20:02:00.149160 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="registry-server" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.149191 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="registry-server" Feb 27 20:02:00 crc kubenswrapper[4941]: E0227 20:02:00.149210 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="extract-utilities" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.149219 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="extract-utilities" Feb 27 20:02:00 crc kubenswrapper[4941]: E0227 20:02:00.149235 4941 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="extract-content" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.149246 4941 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="extract-content" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.149359 4941 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b7a8e4-6445-445a-9a1c-8f50fdb8fff5" containerName="registry-server" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.149804 4941 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.151459 4941 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-dmspt" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.153519 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.154966 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537042-tmz8h"] Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.155776 4941 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.309838 4941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbr7k\" (UniqueName: \"kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k\") pod \"auto-csr-approver-29537042-tmz8h\" (UID: \"bb178460-a203-4b7d-8bb8-373aa5c96b8f\") " pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.410831 4941 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbr7k\" (UniqueName: \"kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k\") pod \"auto-csr-approver-29537042-tmz8h\" (UID: \"bb178460-a203-4b7d-8bb8-373aa5c96b8f\") " pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.429267 4941 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbr7k\" (UniqueName: \"kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k\") pod \"auto-csr-approver-29537042-tmz8h\" (UID: \"bb178460-a203-4b7d-8bb8-373aa5c96b8f\") " 
pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.492084 4941 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.730414 4941 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537042-tmz8h"] Feb 27 20:02:00 crc kubenswrapper[4941]: I0227 20:02:00.809866 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" event={"ID":"bb178460-a203-4b7d-8bb8-373aa5c96b8f","Type":"ContainerStarted","Data":"6b5ffa114a0922e24036fe93dbf46fa4ec8572a1dab22ae3bf1ea3d41078cc07"} Feb 27 20:02:02 crc kubenswrapper[4941]: E0227 20:02:02.375017 4941 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb178460_a203_4b7d_8bb8_373aa5c96b8f.slice/crio-e39809ad4196eec0782ef6779ba34509f567e803f9554763a68e137066a12e4a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb178460_a203_4b7d_8bb8_373aa5c96b8f.slice/crio-conmon-e39809ad4196eec0782ef6779ba34509f567e803f9554763a68e137066a12e4a.scope\": RecentStats: unable to find data in memory cache]" Feb 27 20:02:02 crc kubenswrapper[4941]: I0227 20:02:02.828146 4941 generic.go:334] "Generic (PLEG): container finished" podID="bb178460-a203-4b7d-8bb8-373aa5c96b8f" containerID="e39809ad4196eec0782ef6779ba34509f567e803f9554763a68e137066a12e4a" exitCode=0 Feb 27 20:02:02 crc kubenswrapper[4941]: I0227 20:02:02.828182 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" event={"ID":"bb178460-a203-4b7d-8bb8-373aa5c96b8f","Type":"ContainerDied","Data":"e39809ad4196eec0782ef6779ba34509f567e803f9554763a68e137066a12e4a"} Feb 27 
20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.122240 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.260694 4941 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbr7k\" (UniqueName: \"kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k\") pod \"bb178460-a203-4b7d-8bb8-373aa5c96b8f\" (UID: \"bb178460-a203-4b7d-8bb8-373aa5c96b8f\") " Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.267492 4941 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k" (OuterVolumeSpecName: "kube-api-access-wbr7k") pod "bb178460-a203-4b7d-8bb8-373aa5c96b8f" (UID: "bb178460-a203-4b7d-8bb8-373aa5c96b8f"). InnerVolumeSpecName "kube-api-access-wbr7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.361730 4941 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbr7k\" (UniqueName: \"kubernetes.io/projected/bb178460-a203-4b7d-8bb8-373aa5c96b8f-kube-api-access-wbr7k\") on node \"crc\" DevicePath \"\"" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.752069 4941 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttm4p" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.809944 4941 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttm4p"] Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.842654 4941 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" event={"ID":"bb178460-a203-4b7d-8bb8-373aa5c96b8f","Type":"ContainerDied","Data":"6b5ffa114a0922e24036fe93dbf46fa4ec8572a1dab22ae3bf1ea3d41078cc07"} Feb 27 
20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.842694 4941 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537042-tmz8h" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.842699 4941 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5ffa114a0922e24036fe93dbf46fa4ec8572a1dab22ae3bf1ea3d41078cc07" Feb 27 20:02:04 crc kubenswrapper[4941]: I0227 20:02:04.842761 4941 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttm4p" podUID="8d15d663-67b1-48e2-8e06-c4c27858e991" containerName="registry-server" containerID="cri-o://cb5a32703dfdd0a51410c2612b486c2ba9f1fb06db65c84ba4d0c098fde74610" gracePeriod=2 var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515150374106024447 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015150374107017365 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015150370362016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015150370362015457 5ustar corecore